TransUNet
This repository includes the official project of TransUNet, presented in our paper: TransUNet: Transformers Make Strong Encoders for Medical Image Segmentation.
I successfully ran the training file on Kaggle, but when running the test file I got the error FileNotFoundError: [Errno 2] No such file or directory: '../model/TU_Synapse224/TU_pretrain_R50-ViT-B_16_skip3_bs24_224/epoch_29.pth'. But my question...
The original code is based on 3-channel input, and the author copies the single-channel input image 3 times to fit that format, so you need to change the code accordingly to fit your dataset. But I think...
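The 1-to-3 channel trick described above can be sketched like this in NumPy (the repo does the equivalent on tensors, e.g. with x.repeat(...); this standalone version is just for illustration):

```python
import numpy as np

def to_three_channels(img):
    """Replicate a grayscale image along the channel axis.

    img: array of shape (H, W) or (H, W, 1); returns shape (H, W, 3).
    Mirrors the "copy the input 3 times" adaptation for single-channel
    medical images fed to a 3-channel (ImageNet-pretrained) encoder.
    """
    if img.ndim == 2:
        img = img[..., None]          # (H, W) -> (H, W, 1)
    if img.shape[-1] == 1:
        img = np.repeat(img, 3, axis=-1)  # (H, W, 1) -> (H, W, 3)
    return img
```

If your dataset already has 3 channels (or more), this replication step is where you would adapt the code instead.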
Why are my training and test results different every time?
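Run-to-run variation usually comes from unseeded random number generators and nondeterministic GPU kernels. A minimal seeding sketch (stdlib + NumPy only; with PyTorch you would additionally call torch.manual_seed(seed), torch.cuda.manual_seed_all(seed), and set cudnn.deterministic = True, cudnn.benchmark = False, and even then some CUDA ops remain nondeterministic):

```python
import random
import numpy as np

def seed_everything(seed=1234):
    """Seed the Python and NumPy RNGs so data shuffling and augmentation
    are reproducible across runs. This is a sketch of the idea, not the
    repo's own setup code."""
    random.seed(seed)
    np.random.seed(seed)
```

Calling seed_everything once at the top of the training script makes two runs with the same config draw the same random sequences.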
I am developing the TransUNet architecture in TensorFlow, with and without ResNet50, and the number of parameters of the TransUNet code with image_size=(224, 224, 3), embed_dim=512, MLP size=3072, num_heads=12, num_transformer_layers=12 is not 105...
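A quick way to sanity-check a parameter count is a back-of-the-envelope formula for the transformer encoder alone (this hypothetical helper counts attention, MLP, and LayerNorm weights with biases, and excludes the patch embedding, position embeddings, ResNet50 stem, and decoder, so it will not match a full-model count exactly):

```python
def transformer_encoder_params(d=768, mlp=3072, layers=12):
    """Rough parameter count of a ViT-style transformer encoder.

    d: embedding dimension, mlp: hidden size of the feed-forward block,
    layers: number of transformer layers. An estimate for cross-checking
    a framework's count, not the repo's exact accounting.
    """
    attn = 4 * d * d + 4 * d             # Wq, Wk, Wv, Wo plus biases
    ffn = d * mlp + mlp + mlp * d + d    # two linear layers of the MLP
    norms = 2 * (2 * d)                  # two LayerNorms (scale + shift)
    return layers * (attn + ffn + norms)
```

For ViT-B (d=768, mlp=3072, 12 layers) this lands around 85M encoder parameters, consistent with the usual ~86M figure for ViT-B once embeddings are added; with embed_dim=512 the count will be substantially smaller, which is one place a 105M expectation could diverge.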
Hi, thanks for this great work. I've run into an error during training. All parameters are at their defaults. After some iterations, the loss suddenly becomes large, and then NaN values appear....
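A common first response to this kind of blow-up is to detect the non-finite loss early and stop or skip the step, typically combined with a lower learning rate or gradient clipping. A minimal sketch (loss_is_finite is a hypothetical guard; in PyTorch you would call it on loss.item() and clip with torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0) before optimizer.step()):

```python
import math

def loss_is_finite(loss_value):
    """Return False as soon as the training loss is NaN or infinite,
    so the run can halt at the first bad step instead of silently
    propagating NaNs through the weights."""
    return math.isfinite(loss_value)
```

Catching the first non-finite loss tells you which iteration (and often which batch) triggered the divergence.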
I didn't get a competent training result, and I have some questions about it. I think it's because the dataset is not made of purely abdominal images, but instead...
Hello, I am executing python train.py --dataset owndataset --vit_name R50-ViT-B_16 --batch_size 12 --max_iterations 1000 --max_epochs 350. Any idea what the reason for this is? iteration 755 : loss : 0.232612, loss_ce: 0.031451 iteration...