Dataset preprocessing
I want to train on my own dataset. For image preprocessing, is it correct that the input image size should be 224*224, so the model input is [bs, channel, 224, 224]?
Thanks for your great contributions, I'm looking forward to your reply.
Hello, you are right: the input of the model is [bs, channel, 224, 224]. There is no strict requirement that the size be 224. Actually, the bigger the input size, the better the performance tends to be.
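For anyone unsure how to get from individual images to that [bs, channel, 224, 224] layout, here is a minimal sketch in numpy. The helper name `to_model_batch` is hypothetical (not from the repo); it assumes grayscale images that have already been resized to the same spatial size.

```python
import numpy as np

def to_model_batch(images):
    """Stack single-channel H x W images into a [bs, channel, H, W] batch.

    `images` is a list of 2-D arrays already resized to the same size
    (e.g. 224 x 224); the channel axis is inserted at position 1.
    """
    batch = np.stack(images, axis=0)    # [bs, H, W]
    batch = batch[:, np.newaxis, :, :]  # [bs, 1, H, W]
    return batch

imgs = [np.zeros((224, 224), dtype=np.float32) for _ in range(4)]
x = to_model_batch(imgs)
print(x.shape)  # (4, 1, 224, 224)
```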
Thanks for your clear reply!!!
Hi, I have prepared the dataset as you suggested, but I got an error while training. The image is 224*224 and the label is 224*224 in one .npz file. Here is the error info:
File, line 74, in trainer_synapse: image = image_batch[1, 0:1, :, :] IndexError: index 1 is out of bounds for dimension 0 with size 1. I printed the tensor shape, which is [bs, 1, 224, 224].
My training settings: batch size 4, image size 224, GPU: one 2080Ti. I have tried changing the batch size, but it didn't help.
Looking forward to your reply soon!
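For reference, the IndexError above is the generic symptom of indexing position 1 of a batch whose first (batch) dimension has size 1, which typically happens on the last, smaller batch of an epoch. A minimal numpy sketch of the failure and one defensive fix (clamping the index) follows; this only illustrates the indexing issue, not the repo's actual trainer code:

```python
import numpy as np

# A batch whose batch dimension is 1, as in the error message:
image_batch = np.zeros((1, 1, 224, 224))

# image_batch[1, 0:1, :, :] would raise
# "IndexError: index 1 is out of bounds for dimension 0 with size 1",
# because valid indices along dimension 0 are 0 .. bs-1.
# Clamping the sample index avoids the crash:
idx = min(1, image_batch.shape[0] - 1)
image = image_batch[idx, 0:1, :, :]
print(image.shape)  # (1, 224, 224)
```

In PyTorch loaders the same effect is often handled by dropping the incomplete last batch (e.g. a `drop_last=True` option) instead of clamping.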
@Beckschen thank you so much for your work!
I'm working with RGB images. When I look at the tensor, its dimensions are [bs, 1, 512, 512, 3], not [bs, 3, 512, 512].
For example, with a batch size of 2, when I print x.size() in line 387 of vit_seg_modelling.py (within the if condition), it prints
[2, 1, 512, 512, 3]. This causes a problem when we do x = x.repeat(1, 3, 1, 1) in the next line (since we are not passing enough arguments to the repeat function).
Is this because of the random generator in the dataset loading module? Any help here would be greatly appreciated!
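One way to recover the expected [bs, 3, H, W] layout from a tensor shaped [bs, 1, H, W, 3], assuming the loader is returning HWC-ordered RGB images with a stray singleton axis, is to drop that axis and move channels to position 1 (NHWC to NCHW). A numpy sketch (the same `squeeze`/`permute` pattern works on torch tensors):

```python
import numpy as np

x = np.zeros((2, 1, 512, 512, 3))  # [bs, 1, H, W, 3] as loaded

# Drop the singleton axis, then move channels to position 1 (NHWC -> NCHW):
x = np.squeeze(x, axis=1)          # [bs, H, W, 3]
x = np.transpose(x, (0, 3, 1, 2))  # [bs, 3, H, W]
print(x.shape)  # (2, 3, 512, 512)
```

Note that with 3 channels already present, the `x = x.repeat(1, 3, 1, 1)` line (which is meant to expand grayscale input) should then be skipped.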
Have you got the dataset? Could you please send me a copy? @lisherlock @aneeshgupta42
Hey, have you found a solution? I'm facing the same problem.
Did anyone solve this problem?