John1231983

96 comments of John1231983

I think the author has a good explanation. Regarding dropout, why was it not used in the ImageNet case? Is it because ImageNet is a big dataset, so we do not need it,...

@superhy and @luonango: Thanks for your explanation. For the second question, given a feature map of size `batch_size x 1 x C x 2` and a `1x1` convolution kernel, how do we obtain the output `batch_size x 1 x C x 1`? This...
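A minimal PyTorch sketch of the shape question above; the layer names and sizes here are my own illustration, not taken from the repository:

```python
import torch
import torch.nn as nn

batch_size, C = 4, 64
x = torch.randn(batch_size, 1, C, 2)   # feature map of size batch_size x 1 x C x 2

# A 1x1 convolution does not change the spatial dimensions,
# so the last dimension stays 2.
conv_1x1 = nn.Conv2d(in_channels=1, out_channels=1, kernel_size=1)
print(conv_1x1(x).shape)                   # torch.Size([4, 1, 64, 2])

# Collapsing the last dimension to 1 needs, e.g., a (1, 2) kernel
# or an average over that axis.
conv_1x2 = nn.Conv2d(in_channels=1, out_channels=1, kernel_size=(1, 2))
print(conv_1x2(x).shape)                   # torch.Size([4, 1, 64, 1])
print(x.mean(dim=3, keepdim=True).shape)   # torch.Size([4, 1, 64, 1])
```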

I see. It looks like you enlarge the channels from 1 to C/16, then average them together into one vector, then flatten to 2C and apply an FC layer to obtain the attention...

Just a final question before closing this: why not do something like the following? Concatenate `batch_size x C` with `batch_size x C` to obtain `batch_size x 2C`, then reshape to `batch_size...
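A hedged sketch of the alternative being proposed here, assuming the two `batch_size x C` descriptors are fused by a single FC layer into a channel-attention vector; the layer sizes and the sigmoid gate are my assumptions, not the repository's code:

```python
import torch
import torch.nn as nn

batch_size, C = 4, 64
a = torch.randn(batch_size, C)   # first pooled descriptor, batch_size x C
b = torch.randn(batch_size, C)   # second pooled descriptor, batch_size x C

# Concatenate to batch_size x 2C, then map back to C channel weights with
# one FC layer followed by a sigmoid gate (assumed fusion, for illustration).
fused = torch.cat([a, b], dim=1)          # batch_size x 2C
fc = nn.Linear(2 * C, C)
attention = torch.sigmoid(fc(fused))      # batch_size x C channel attention
print(attention.shape)                    # torch.Size([4, 64])
```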

Thanks for the reply.
> we are afraid that too much parameter reduction in the channel-wise attention module will lead to performance loss

So you enlarged it from 1 to...

Yes, it is an FC layer. Did you try it before and compare it with your approach? Many papers use this design (SENet...), so you can show the benefit of your approach over theirs. I...

Good. Also consider removing the SE block after the transition block. We often use the SE block after the dense block only; the transition block only helps to reduce the feature size.
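For reference, a self-contained sketch of the SENet-style FC attention mentioned above and of the placement being suggested: the SE gate sits right after the dense block, and the transition block is left untouched. The dense and transition blocks below are simple placeholders, not the repository's modules.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Standard SENet-style gate: global average pool, FC reduce, FC expand, sigmoid."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        n, c, _, _ = x.shape
        w = self.fc(x.mean(dim=(2, 3))).view(n, c, 1, 1)  # squeeze, then excite
        return x * w                                      # rescale channels

# Placeholder dense and transition blocks, only to show the placement.
dense_block = nn.Conv2d(64, 128, kernel_size=3, padding=1)
transition = nn.Sequential(nn.Conv2d(128, 64, kernel_size=1), nn.AvgPool2d(2))

model = nn.Sequential(
    dense_block,
    SEBlock(128),   # SE block right after the dense block
    transition,     # transition only reduces feature size; no SE block here
)

print(model(torch.randn(2, 64, 32, 32)).shape)   # torch.Size([2, 64, 16, 16])
```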

Good job. But the results show that performance with and without the SE block is similar. In the full code, you have added the SE block in both the transition layer and the dense layer. How about...

Good. That is what I expected. You can also try something like:
1. SE block in the loop only after the dense block (remove the SE block in the transition)
2. SE block in the loop only after...

Hi. I can only download MURA-v1.1 and I get an error when training:
```
image = pil_loader(study_path + 'image%s.png' % (i+1))
  File "/home/john/anaconda3/lib/python3.6/site-packages/torchvision/datasets/folder.py", line 128, in pil_loader
    with open(path, 'rb')...
```
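The traceback above is cut off, but if the failure is simply that `imageN.png` does not exist under `study_path`, one hedged workaround is to load whatever PNG files are actually present instead of assuming their names; the `load_study_images` helper below is hypothetical, not part of the repository:

```python
import os
from glob import glob
from torchvision.datasets.folder import pil_loader

def load_study_images(study_path):
    # List the PNG files that actually exist in the study folder instead of
    # assuming they are named image1.png, image2.png, ...
    image_paths = sorted(glob(os.path.join(study_path, '*.png')))
    if not image_paths:
        raise FileNotFoundError('no .png files found under %s' % study_path)
    return [pil_loader(p) for p in image_paths]
```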