s2anet

Official code of the paper "Align Deep Features for Oriented Object Detection"

47 s2anet issues

When training with the s2anet config on the DOTA dataset, the loss of the regression branch is NaN at the beginning of training. Do you have any suggestions...

When I use demo_inference to test, I get the following error:

```
File "demo/demo_inference.py", line 118, in <module>
    colormap=dota_colormap)
File "demo/demo_inference.py", line 60, in save_det_result
    dataset = build_dataset(data_test)
File "/home/zhangyu/s2anet/mmdet/datasets/builder.py", line...
```

Hello, I would like to reuse the code that computes the IoU of two rotated boxes. Are the rotated boxes (x, y, w, h, θ) normalized, and does changing the angle range of θ to [-π/2, π/2) have any effect? When I use the code directly, the IoU comes out as NaN. ![image](https://user-images.githubusercontent.com/37444556/173271581-c3598d19-0d91-4bb8-af68-a90052e01de4.png)
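A common first step before comparing rotated boxes under the [-π/2, π/2) convention is to wrap the angle into that range; NaN IoUs often trace back to an inconsistent angle convention or degenerate boxes (w or h ≤ 0). A minimal sketch, not taken from the repository:

```python
import math

def normalize_angle(theta: float) -> float:
    """Wrap an angle (radians) into the half-open range [-pi/2, pi/2).

    For a rotated box (x, y, w, h, theta), the box orientation is only
    meaningful modulo pi, so wrapping theta this way does not change
    which region the box covers.
    """
    # Shift into [0, pi) via modulo, then shift back to [-pi/2, pi/2).
    return (theta + math.pi / 2) % math.pi - math.pi / 2
```

If IoU is still NaN after normalization, it is worth asserting that w > 0 and h > 0 for every box before the intersection routine runs.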

After switching the backbone to Swin-Transformer-Tiny, mAP stayed at 0 and the network detected no objects at all. I ignored it at the time, but today the same thing happened after switching the backbone to CrossFormer. Below is the modified part of my config file; advice from anyone who knows the cause would be appreciated. ![image](https://user-images.githubusercontent.com/53329020/170447722-72371732-b4a6-44bc-961c-93b9135fda93.png)

I enlarged the image size. How should I modify trainval_s2anet.pkl and test_s2anet.pkl accordingly?

Does this code support evaluation during training? If I want to evaluate while training, what should I do?
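In mmdetection-style codebases like this one, per-epoch evaluation is normally switched on through an `evaluation` dict in the config. A hedged sketch of the usual fragment; whether this fork fully wires up the evaluation hook is an assumption:

```python
# mmdet-style config fragment (assumed convention, not verified
# against this fork): run validation every training epoch.
evaluation = dict(interval=1, metric='mAP')
```

If the hook raises errors, the dataset class must implement an `evaluate` method for this to work.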

![baocuo](https://user-images.githubusercontent.com/72692177/166401192-b7382f0e-8b51-45cd-995b-e4d32dcc844a.png) ![gengai2](https://user-images.githubusercontent.com/72692177/166401211-66bce8d4-86a6-43a2-9b43-730657cfac6f.png)

Many thanks to the authors for their work. #### Using focal_loss, whether I train a model from scratch or continue training from the authors' 12-epoch model, the result is the same: everything is predicted as class 0. ![image](https://user-images.githubusercontent.com/44184027/138691651-57d64d78-d3ac-477b-b8c7-a041f5f7f52f.png) #### Debug output at inference time: ![image](https://user-images.githubusercontent.com/44184027/138831087-663a4a60-6e98-45cf-8000-74aca4723676.png) #### Training log for epoch 13, continuing from the authors' model; loss_cls is very large at the start: ![image](https://user-images.githubusercontent.com/44184027/138701783-b9e46868-bbe8-443e-9acd-e7197b017783.png) #### The focal_loss inputs during training; as iterations proceed, target.sum() -> 0. Have all targets become background? ![image](https://user-images.githubusercontent.com/44184027/138831931-39589d27-cd96-4e49-a20f-a2e34c9b6b11.png) #### After I switched to cross-entropy loss, the results were normal. I hope the authors can shed some light on this.

![image](https://user-images.githubusercontent.com/38311978/142992844-c04e5893-c382-46c4-b34d-8d0c60ba1256.png) TypeError: __init__() got an unexpected keyword argument 'gt_dir' ![image](https://user-images.githubusercontent.com/38311978/142992934-f4a6b2e4-cfa9-488f-ac82-a302a2d81c76.png) Removing `evaluation` from the config makes it run. The cause seems to be that the code never reaches DOTADataset's evaluate method. Has anyone else hit this problem, and how did you solve it?

For a 3×3 convolution, does one group of offsets generate one output channel? That is, given an H×W×256 feature map, if 512 groups of offsets (each offset being 3×3×2) are fed to a 3×3 AlignConv, should the output feature be H×W×512, so that the number of output channels equals the number of offset groups? Or does it not work that way? What is the correspondence between the number of output channels of the AlignConv feature map and the number of offset groups?
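For a standard deformable convolution (which AlignConv builds on), the offset tensor and the output channel count are independent: the offsets carry 2·kh·kw channels per spatial location (one (Δx, Δy) pair for each of the kh×kw sampling points, times the number of offset groups), while the output channels come from the number of conv filters. A hedged pure-Python shape check, not the repository's code:

```python
def deform_conv_shapes(in_ch, out_ch, kh, kw, H, W, offset_groups=1):
    """Return (offset_shape, output_shape) for a stride-1, 'same'-padded
    deformable conv.

    Offsets supply one (dx, dy) pair per kernel sampling point (and per
    offset group); out_ch is fixed by the conv weights and does not
    depend on the offset tensor at all.
    """
    offset_shape = (offset_groups * 2 * kh * kw, H, W)
    output_shape = (out_ch, H, W)
    return offset_shape, output_shape

# A 3x3 AlignConv on an H x W x 256 map: the offsets occupy 18 channels
# (2 * 3 * 3), and the output keeps 256 channels if 256 filters are used.
offsets, out = deform_conv_shapes(256, 256, 3, 3, 100, 100)
```

So 512 output channels would come from using 512 filters, not from supplying 512 "groups" of offsets.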