Bofang Liu

10 comments by Bofang Liu

I got the same error; could you please tell me how to resolve the problem? @zhiyaoma

I guess the cuDNN error **CUDNN_STATUS_NOT_SUPPORTED** was caused by the Python **list** in `a_b=list(a_b.chunk(self.M,dim=1))`, which is not supported when converting a CUDA tensor into a form like...
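As a hedged sketch of the workaround I would try (assuming `a_b` has shape `(batch, M * C, H, W)` as in the SKNet code): instead of splitting the tensor into a Python list with `chunk`, reshape it so the branch dimension `M` stays inside a single contiguous CUDA tensor.

```python
import torch

# Hypothetical shapes for illustration only.
batch, M, C, H, W = 2, 2, 4, 3, 3
a_b = torch.randn(batch, M * C, H, W)

# Instead of: a_b = list(a_b.chunk(M, dim=1))  -> a Python list of tensors,
# keep everything as one tensor with an explicit branch dimension.
a_b = a_b.reshape(batch, M, C, H, W)
print(a_b.shape)  # torch.Size([2, 2, 4, 3, 3])
```

Individual branches are then plain indexed views (`a_b[:, 0]`, `a_b[:, 1]`, ...), so no list of separate tensors is ever materialized.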

The reason I used conv2d to implement the fully connected layer is that the author of SKNet adopted conv2d. Because there is a bias in the bn...
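For context, a minimal sketch (layer sizes are arbitrary, not from the repo) showing why a 1x1 conv2d can stand in for a fully connected layer: on a `(N, C, 1, 1)` input, the two are the same linear map once the weights are shared.

```python
import torch
import torch.nn as nn

fc = nn.Linear(8, 4)
conv = nn.Conv2d(8, 4, kernel_size=1)

# Copy the fc weights into the 1x1 conv so both compute the same function.
with torch.no_grad():
    conv.weight.copy_(fc.weight.reshape(4, 8, 1, 1))
    conv.bias.copy_(fc.bias)

x = torch.randn(2, 8)
out_fc = fc(x)
out_conv = conv(x.reshape(2, 8, 1, 1)).reshape(2, 4)
print(torch.allclose(out_fc, out_conv, atol=1e-6))  # True
```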

```python
s = self.global_pool(U)
z = self.fc1(s)
a_b = self.fc2(z)
a_b = a_b.reshape(batch_size, self.M, self.out_channels, -1)
a_b = self.softmax(a_b)
```

You mean here? @XUYUNYUN666

It is not necessary, because it depends on the loss function you used. And I don't understand your point about using conv2d instead of an fc layer. @XUYUNYUN666

The dimension of in_channels is the same as out_channels; see [the code](https://github.com/ResearchingDexter/SKNet_pytorch/blob/1e1ce1a221414f249ac4a898f99ec9e3ffaf09fd/SKNet.py#L47)
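A minimal sketch of my reading of that attention step (sizes here are made up; `fc2` is the conv2d-as-fc layer discussed above): `fc2` emits `M * out_channels` values, which are reshaped per branch and softmaxed over the branch dimension, so the per-branch weights sum to 1 for each channel.

```python
import torch
import torch.nn as nn

batch, M, out_channels, r = 2, 2, 32, 16   # r: hypothetical reduced width
z = torch.randn(batch, r, 1, 1)            # squeezed feature from fc1
fc2 = nn.Conv2d(r, out_channels * M, 1)    # conv2d used as an fc layer

a_b = fc2(z).reshape(batch, M, out_channels, -1)
a_b = torch.softmax(a_b, dim=1)            # softmax across the M branches
print(a_b.shape)  # torch.Size([2, 2, 32, 1])
```

Because the branch-weighted feature maps are summed back into one tensor of `out_channels` channels, the unit is naturally used with in_channels equal to out_channels.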

The condition may be caused by the anchor sizes: the anchors' sizes may not match the objects you are trying to detect.

> both loc_loss and cls_loss are coming out NaN; can you suggest a solution?

I suppose you can print the number of positive examples and adjust the anchor ratios according to that number.
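A hedged sketch of that diagnostic (the `iou` helper and the 0.5 positive threshold are my own assumptions, not from the detector in question): count how many anchors overlap any ground-truth box above the threshold; if that count is near zero, the anchor scales/ratios likely do not match your objects, which commonly drives the losses to NaN.

```python
import torch

def iou(anchors, boxes):
    """Pairwise IoU between (A, 4) anchors and (B, 4) boxes in (x1, y1, x2, y2)."""
    lt = torch.max(anchors[:, None, :2], boxes[None, :, :2])
    rb = torch.min(anchors[:, None, 2:], boxes[None, :, 2:])
    wh = (rb - lt).clamp(min=0)
    inter = wh[..., 0] * wh[..., 1]
    area_a = (anchors[:, 2] - anchors[:, 0]) * (anchors[:, 3] - anchors[:, 1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a[:, None] + area_b[None, :] - inter)

# Toy anchors and one ground-truth box, for illustration.
anchors = torch.tensor([[0., 0., 10., 10.],
                        [0., 0., 100., 100.],
                        [50., 50., 60., 60.]])
gt = torch.tensor([[0., 0., 12., 12.]])

overlaps = iou(anchors, gt)
num_pos = (overlaps.max(dim=1).values >= 0.5).sum().item()
print(num_pos)  # 1 -> only the first anchor matches the small object
```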