Yingkai (Kyle) Sha

8 comments by Yingkai (Kyle) Sha

Have you configured your model to accept two input channels and then tested it with data that has three channels?

No. Swin-Transformer models accept a fixed input size only. If you configure one for 64-by-64 inputs, it cannot make predictions on 256-by-256 inputs.
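A rough sketch of why the size is fixed, assuming the usual Swin setup where the patch-embedding and position tables are sized from the configured input resolution (this is a generic illustration, not the actual keras_vision_transformer API; `num_patches` is a hypothetical helper):

```python
# Why a Swin-Transformer built for 64x64 cannot run on 256x256:
# the number of patch tokens, and hence the shapes of the learned
# position/attention tables, is derived from the configured size.

def num_patches(input_size, patch_size):
    # Patch tokens along each axis; Swin requires exact divisibility.
    if input_size % patch_size != 0:
        raise ValueError("input size must be divisible by patch size")
    return (input_size // patch_size) ** 2

print(num_patches(64, 4))   # 256 tokens for the 64x64 configuration
print(num_patches(256, 4))  # 4096 tokens -- the trained weight shapes no longer match
```

So feeding a 256-by-256 image into a 64-by-64 model fails at the first shape-dependent layer rather than silently resizing.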

Hi, would you mind providing your code and the full error message? I will look into it. When you say "grainy", do you mean the images are too coarse? --- it...

If you run this on GPUs, then a possible reason is that your configuration is too large. `[64, 128, 128]` --> `[64, 128, 128, 256]` adds a lot of weights.
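A quick back-of-the-envelope sketch of how much one extra level costs. This uses the standard Conv2D parameter count (`kernel * kernel * in * out + out`) and assumes two 3-by-3 convolutions per encoder block; the exact block layout in keras-unet-collection may differ, and `conv_params` is a hypothetical helper:

```python
# Rough estimate of the extra weights from growing the filter stack
# [64, 128, 128] --> [64, 128, 128, 256].

def conv_params(kernel, c_in, c_out):
    # Weights + biases of a single 2-D convolution layer.
    return kernel * kernel * c_in * c_out + c_out

# One extra down-sampling block alone (two 3x3 convs, 128 -> 256 -> 256):
extra = conv_params(3, 128, 256) + conv_params(3, 256, 256)
print(extra)  # 885248 additional parameters, before counting the decoder side
```

The deepest level dominates the total because filter counts are largest there, so dropping it back to `[64, 128, 128]` is a reasonable first step when you hit GPU out-of-memory errors.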

@parniash Would you mind sharing your code? I don't think the network would get bigger.

If I plug in `(128, 128, 3)`, `n_labels=2` --- Model 1: `Total params: 30,508,410`; Model 2: `Total params: 29,812,794`. So your second configuration is smaller; there is no problem. I feel...

@georgedeath Thank you for pointing it out. I will experiment with this preprocessing option and update the example accordingly.

I will work on them in the coming days.