Jieneng Chen
Hello, thanks for your comments! (1) The default number of channels in the Synapse dataset is 1. But lines 386-397 in [vit_seg_modeling.py](https://github.com/Beckschen/TransUNet/blob/main/networks/vit_seg_modeling.py) suggest: `if x.size()[1] == 1: x = x.repeat(1,3,1,1)`. So the code...
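For reference, a minimal standalone sketch of what that check does (not the repo's full forward pass, just the channel handling):

```python
import torch

x = torch.randn(4, 1, 224, 224)     # grayscale batch, [bs, channel, H, W]
if x.size()[1] == 1:                # same check as in vit_seg_modeling.py
    x = x.repeat(1, 3, 1, 1)        # replicate the single channel to 3 channels
print(x.shape)                      # torch.Size([4, 3, 224, 224])
```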
Hello, you are right. The input of the model is [bs, channel, 224, 224]. There is no strict limit on the size of 224. Actually, the bigger the size is,...
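A hedged sketch of building the model with a non-default input size, assuming the config names used in train.py of this repo ('R50-ViT-B_16', n_classes, n_skip, patches.grid); please check your local version before relying on the exact arguments:

```python
import torch
from networks.vit_seg_modeling import VisionTransformer as ViT_seg
from networks.vit_seg_modeling import CONFIGS as CONFIGS_ViT_seg

img_size = 320                                   # assumption: any size divisible by 16
config_vit = CONFIGS_ViT_seg['R50-ViT-B_16']
config_vit.n_classes = 9                         # Synapse: 8 organs + background
config_vit.n_skip = 3
config_vit.patches.grid = (img_size // 16, img_size // 16)   # feature grid after 16x downsampling

net = ViT_seg(config_vit, img_size=img_size, num_classes=config_vit.n_classes)
x = torch.randn(2, 1, img_size, img_size)        # [bs, channel, H, W]
print(net(x).shape)                              # expected: [2, 9, 320, 320]
```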
It seems that you did not successfully access the key in the pretrained weights. Could you please just replace line 192 of vit_seg_modeling.py: `query_weight = np2th(weights[pjoin(ROOT, ATTENTION_Q, "kernel")]).view(self.hidden_size, self.hidden_size).t()` with...
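If you want to check which keys are actually present in the .npz checkpoint before touching line 192, a quick sketch like the one below can help (the path is only an example; use whichever checkpoint you downloaded):

```python
import numpy as np

weights = np.load("../model/vit_checkpoint/imagenet21k/R50+ViT-B_16.npz")
for name in sorted(weights.files)[:20]:          # print a sample of the stored keys
    print(name, weights[name].shape)
```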
> Hello, it seems that the code currently only works on grayscale images. I am interested in processing images with 3 channels (RGB). Has anyone already modified the code accordingly?...
> Hello, I sent an email to you to get the preprocessed database, but maybe because you are too busy to check your mailbox or my email is judged as...
@Rustastra Thanks so much for your contribution! I will run it first, and if everything is fine, we will put it on the README for the convenience of those people...
Hello, sorry for the late reply. Please nudge me by email if you have an urgent question next time. You raise a good question: Google didn't provide the pretrained...
Hello, many thanks for your questions! Sorry for the late reply.
1.1) Please note that the images saved in .npy are normalized (see the sketch below).
1.2) Some 2D slices would have no...
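A hedged sketch of the kind of normalization assumed here: clip the raw CT intensities to a fixed HU window and scale to [0, 1]. The window [-125, 275] is a common choice for abdominal CT and is an assumption, not something confirmed in this thread:

```python
import numpy as np

def normalize_ct(slice_hu, low=-125.0, high=275.0):
    # Clip to the HU window, then rescale to [0, 1]
    clipped = np.clip(slice_hu, low, high)
    return (clipped - low) / (high - low)

raw = np.random.uniform(-1000, 1000, size=(512, 512))   # stand-in for a raw HU slice
img = normalize_ct(raw)
print(img.min(), img.max())                              # both within [0, 1]
```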
Hello, many thanks for your questions. The patch size you calculated is (1, 1) in the feature grid, which represents (16, 16) at the image level, since the image is downsampled 16x through...
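A tiny sanity check of that arithmetic, assuming a 224x224 input and 16x total downsampling:

```python
img_size = 224
downsample = 16                       # cumulative downsampling before the Transformer
grid = img_size // downsample         # 14x14 feature grid
patch_in_image = downsample           # a (1, 1) cell in the grid covers 16x16 pixels
print(grid, patch_in_image)           # 14 16
```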
Sorry for the late reply. For a slice, the shape of sample['image'] and sample['label'] is the original shape, (512, 512). For a volume (e.g. the testing cases in ".h5" format), the shape should...
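A hedged sketch for inspecting those shapes yourself; the keys 'image' and 'label' follow datasets/dataset_synapse.py, while the file paths below are hypothetical examples:

```python
import numpy as np
import h5py

# Training slice: a .npz file with 'image' and 'label' arrays.
data = np.load("train_npz/case0005_slice012.npz")
print(data['image'].shape, data['label'].shape)     # expected: (512, 512) (512, 512)

# Testing volume: a .npy.h5 file with the same two keys; the shape is assumed to be
# (num_slices, 512, 512) when preprocessing follows the repo's instructions.
with h5py.File("test_vol_h5/case0008.npy.h5", "r") as vol:
    print(vol['image'][:].shape, vol['label'][:].shape)
```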