SuPreM
How to apply it to other datasets?
The pretraining-finetuning paradigm has become the most popular way to achieve better task performance at lower cost, and transfer learning is increasingly important for 3D segmentation in medical imaging. However, inconsistencies in data size and channel/category layout across datasets make transfer learning difficult. For example, TotalSegmentator images have only one channel, while the Decathlon data has two channels. This mismatch makes it hard to finetune a model pretrained on TotalSegmentator on the Decathlon dataset, because directly changing the model's input channels means the pretrained weights can no longer be loaded.
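To make the mismatch concrete, here is a minimal PyTorch sketch (not SuPreM's actual code; the stem layer and shapes are hypothetical) showing why a 1-channel checkpoint fails to load into a 2-channel model, along with one common heuristic: replicating the pretrained kernel across the new channel dimension and rescaling it.

```python
# Minimal sketch of the channel-mismatch problem; layer names/shapes are
# illustrative, not taken from the SuPreM codebase.
import torch
import torch.nn as nn

# Pretrained stem: 1 input channel (e.g., CT from TotalSegmentator).
pretrained_stem = nn.Conv3d(in_channels=1, out_channels=32, kernel_size=3, padding=1)
state_dict = pretrained_stem.state_dict()        # weight shape: [32, 1, 3, 3, 3]

# Target stem: 2 input channels (e.g., a two-modality Decathlon task).
new_stem = nn.Conv3d(in_channels=2, out_channels=32, kernel_size=3, padding=1)

try:
    new_stem.load_state_dict(state_dict)         # fails: [32, 1, ...] vs [32, 2, ...]
except RuntimeError as e:
    print("size mismatch:", e)

# One common workaround (a heuristic, not necessarily optimal): replicate the
# pretrained kernel across the new channel dimension and rescale so the
# activations keep a similar magnitude.
w = state_dict["weight"]                          # [32, 1, 3, 3, 3]
state_dict["weight"] = w.repeat(1, 2, 1, 1, 1) / 2.0
new_stem.load_state_dict(state_dict)              # now loads
```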
To address this, one straightforward approach is to process the two Decathlon channels separately with the pretrained network and then sum the results before the linear classification projection (a rough sketch follows below). However, this simple approach may under-utilize the pretrained weights. More broadly, how should a model pretrained on data with 'm' channels be finetuned on data with 'n' channels (where m does not equal n)? This remains an open question.
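Below is a sketch of that per-channel workaround, assuming a pretrained single-channel encoder and a segmentation head; `encoder` and `head` are placeholder modules, not SuPreM's actual components.

```python
# Sketch of the "process each channel separately, then sum" idea described
# above; the encoder/head here are toy stand-ins for a real pretrained model.
import torch
import torch.nn as nn

class PerChannelWrapper(nn.Module):
    def __init__(self, encoder: nn.Module, head: nn.Module):
        super().__init__()
        self.encoder = encoder   # pretrained, expects [B, 1, D, H, W]
        self.head = head         # e.g., a 1x1x1 Conv3d producing class logits

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: [B, C, D, H, W] with C > 1; run the shared 1-channel encoder on
        # each channel independently and sum the feature maps before the head.
        feats = sum(self.encoder(x[:, c:c + 1]) for c in range(x.shape[1]))
        return self.head(feats)

# Toy instantiation to illustrate shapes only.
encoder = nn.Conv3d(1, 32, kernel_size=3, padding=1)
head = nn.Conv3d(32, 4, kernel_size=1)
model = PerChannelWrapper(encoder, head)
out = model(torch.randn(2, 2, 16, 16, 16))   # -> [2, 4, 16, 16, 16]
```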