a1b2c3s4d4

Results: 12 issues of a1b2c3s4d4

How can I do multi-scale training and testing? Thanks!
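Not an answer from this repository's docs, but a minimal sketch of how multi-scale training and testing are usually set up in MMDetection-style configs (which this codebase follows). The scale values and pipeline keys below are assumptions, and the OBB-specific transforms in this repo have different names; for DOTA-style data, multi-scale is also commonly handled earlier, by re-splitting the source images at several resize rates before training.

```python
# Hypothetical config fragment: multi-scale training samples one scale per image,
# multi-scale testing runs the same image at several scales via MultiScaleFlipAug.
img_norm_cfg = dict(mean=[123.675, 116.28, 103.53],
                    std=[58.395, 57.12, 57.375], to_rgb=True)

train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='LoadAnnotations', with_bbox=True),
    # 'range' mode samples a scale between the two values for every image.
    dict(type='Resize',
         img_scale=[(1333, 480), (1333, 960)],
         multiscale_mode='range',
         keep_ratio=True),
    dict(type='RandomFlip', flip_ratio=0.5),
    dict(type='Normalize', **img_norm_cfg),
    dict(type='Pad', size_divisor=32),
    dict(type='DefaultFormatBundle'),
    dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']),
]

test_pipeline = [
    dict(type='LoadImageFromFile'),
    # The inner transforms are repeated once per listed scale at test time.
    dict(type='MultiScaleFlipAug',
         img_scale=[(1333, 600), (1333, 800), (1333, 1000)],
         flip=False,
         transforms=[
             dict(type='Resize', keep_ratio=True),
             dict(type='Normalize', **img_norm_cfg),
             dict(type='Pad', size_divisor=32),
             dict(type='ImageToTensor', keys=['img']),
             dict(type='Collect', keys=['img']),
         ]),
]
```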

loading annotations into memory...
loading annotations into memory...
loading annotations into memory...
loading annotations into memory...
Done (t=15.55s) creating index...
Done (t=15.76s) creating index...
Done (t=15.56s) creating index...
index created!...

Hello, I'm running on a single RTX 3090 without changing any of the model's parameters, and my reproduced result is:
This is your evaluation result for task 1 (VOC metrics): mAP: 0.7184670471952441
ap of each class: plane:0.8924992637024642, baseball-diamond:0.8166412996024346, bridge:0.5392068130234289, ground-track-field:0.7210017470814063, small-vehicle:0.6668245582289634, large-vehicle:0.821257238017224, ship:0.8659460437061555, tennis-court:0.9088855421686749, basketball-court:0.8568508433788743, storage-tank:0.8430793822881508, soccer-ball-field:0.5813415347777581, roundabout:0.4294881278300593, harbor:0.6725001390245908,...
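For reference, the VOC-style mAP reported above is simply the arithmetic mean of the per-class APs. A minimal check over the values shown (the preview truncates the last classes, so averaging only the listed values will not reproduce 0.7185 exactly):

```python
# Per-class APs copied (rounded) from the evaluation output above; the preview
# cuts off the remaining classes, so this is only a partial sanity check.
ap = {
    'plane': 0.8925, 'baseball-diamond': 0.8166, 'bridge': 0.5392,
    'ground-track-field': 0.7210, 'small-vehicle': 0.6668,
    'large-vehicle': 0.8213, 'ship': 0.8659, 'tennis-court': 0.9089,
    'basketball-court': 0.8569, 'storage-tank': 0.8431,
    'soccer-ball-field': 0.5813, 'roundabout': 0.4295, 'harbor': 0.6725,
}
print(sum(ap.values()) / len(ap))  # mean over the classes listed here only
```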

![image](https://user-images.githubusercontent.com/42224576/168518355-b431d990-08dd-4a1b-8496-99c5c92e8bfe.png) ![image](https://user-images.githubusercontent.com/42224576/168518281-9d35161f-e490-40b8-9a7e-acaadd6b937d.png)
Is the target for the smooth L1 loss in BBoxHead of the form (x, y, w, h, theta)? Why is theta not restricted to the range [-pi, pi)? Thanks!
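Not this repository's actual convention, but a small sketch of how rotated-box codebases typically fold an angle into a fixed half-open, pi-wide range (a rectangle is unchanged by a 180-degree rotation once width and height are swapped, so a pi-wide range such as [-pi/2, pi/2) is often used instead of [-pi, pi)):

```python
import math

def norm_angle(theta: float, start: float = -math.pi / 2,
               period: float = math.pi) -> float:
    """Fold theta into the half-open interval [start, start + period)."""
    return (theta - start) % period + start

# Example: an angle of -3.1059 rad folds to roughly 0.036 rad.
print(norm_angle(-3.1059))
```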

Hi, in Oriented R-CNN, which file contains the code that adjusts the parallelogram proposals generated by the RPN into oriented rectangles? And which file converts the oriented-rectangle representation into (x, y, w, h, theta)? Thanks!!
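I can't point at the exact file, but as a rough sketch of the geometry described in the Oriented R-CNN paper: the midpoint-offset proposal (x, y, w, h, Δα, Δβ) defines a parallelogram, and one simple way (not necessarily the repository's actual rectification scheme) to turn it into an oriented rectangle (x, y, w, h, θ) is to take an enclosing rotated rectangle of its four vertices:

```python
import cv2
import numpy as np

def midpoint_offset_to_obb(x, y, w, h, da, db):
    """Hypothetical helper: build the parallelogram's four vertices from the
    midpoint-offset representation, then fit an enclosing oriented rectangle.
    This is an approximation, not the repo's verified implementation."""
    pts = np.array([
        [x + da, y - h / 2],   # offset midpoint of the top edge
        [x + w / 2, y + db],   # offset midpoint of the right edge
        [x - da, y + h / 2],   # symmetric point on the bottom edge
        [x - w / 2, y - db],   # symmetric point on the left edge
    ], dtype=np.float32)
    (cx, cy), (rw, rh), angle_deg = cv2.minAreaRect(pts)
    return cx, cy, rw, rh, float(np.deg2rad(angle_deg))

print(midpoint_offset_to_obb(100, 100, 40, 20, 5, 3))
```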

In /DOTA_OBBDetection/mmdet/models/roi_heads/roi_extractors/obb/obb_single_level_roi_extractor.py, in def forward(self, feats, rois, roi_scale_factor=None): what does feats refer to? Below is the rois output; what do the six numbers in each row mean? They look like the midpoint-offset representation (x, y, w, h, Δα, Δβ), but the values don't match up... Any help would be appreciated.
# tensor([[0.0000e+00, 1.8100e+02, 9.1600e+02, 2.8410e+01, 1.0993e+01,
#          -3.1059e+00],
#         [0.0000e+00, 7.3223e+02, 9.3639e+02, 2.1954e+01, 7.5611e+00,
#          -1.0460e+00],
#         [0.0000e+00, 7.9756e+02, 3.0058e+02,...
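Not an authoritative answer, but in MMDetection-style RoI extractors feats is normally the tuple of FPN feature maps (one tensor per pyramid level), and each roi row is prefixed with the index of the image inside the batch. A small sketch of how one might inspect them; the column meanings are my assumption, not verified against this repo:

```python
import torch

def inspect_roi_inputs(feats, rois):
    """Illustrative debug helper for the RoI extractor inputs."""
    # feats: tuple of FPN feature maps, one per pyramid level,
    # each shaped (num_imgs, channels, H_l, W_l).
    for lvl, f in enumerate(feats):
        print(f'level {lvl}: {tuple(f.shape)}')
    # rois: (num_rois, 6); the first column is the image index in the batch,
    # the remaining five are most likely an already-decoded rotated box
    # (cx, cy, w, h, theta), i.e. not the raw (dalpha, dbeta) offsets.
    print(rois[:3])

# Toy call with fake data just to show the expected shapes.
fake_feats = tuple(torch.zeros(2, 256, s, s) for s in (64, 32, 16, 8))
fake_rois = torch.tensor([[0., 181.0, 916.0, 28.41, 10.99, -3.1059]])
inspect_roi_inputs(fake_feats, fake_rois)
```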

Excuse me, how should I visualize the heatmaps of the classification task and the localization task in object detection, respectively? Can you give me some ideas? Thanks!!!
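One hedged idea, using plain PyTorch/matplotlib rather than any API from this repository: take the classification score map produced by the dense head for one FPN level, reduce it over the class dimension, and overlay it on the image; the localization branch can be visualized the same way from its own output (e.g. the magnitude of the box regression map). The helper and usage names below are illustrative.

```python
import matplotlib.pyplot as plt
import torch
import torch.nn.functional as F

def overlay_heatmap(img_bgr, score_map, alpha=0.5):
    """Resize a single-channel score map to the image size and overlay it.

    img_bgr: HxWx3 uint8 image; score_map: (h, w) tensor of raw logits/scores.
    """
    h, w = img_bgr.shape[:2]
    heat = torch.sigmoid(score_map)[None, None]          # (1, 1, h, w)
    heat = F.interpolate(heat, size=(h, w), mode='bilinear',
                         align_corners=False)[0, 0].cpu().numpy()
    plt.imshow(img_bgr[..., ::-1])                       # BGR -> RGB for display
    plt.imshow(heat, cmap='jet', alpha=alpha)
    plt.axis('off')
    plt.show()

# Typical usage (names are assumptions): run backbone + neck + head once,
# pick one level of the classification output, max-reduce over classes:
# cls_scores, bbox_preds = model.bbox_head(model.extract_feat(img_tensor))
# overlay_heatmap(img_np, cls_scores[0][0].max(dim=0).values)
```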

### Model/Dataset/Scheduler description
How can I switch the backbone to a ViT? Thanks.
### Open source status
- [ ] The model implementation is available
- [ ] The model weights are available.
### Provide useful links for...
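This repo doesn't document a ViT backbone, but the usual MMDetection route is to register a custom backbone class and point the config at it. A rough sketch follows; ToyViTBackbone is a placeholder name, and a real ViT wrapper would need to turn its token grid into FPN-compatible multi-scale feature maps (e.g. a simple feature pyramid on the final tokens, as in ViTDet):

```python
# my_vit_backbone.py -- hypothetical file registering a ViT-style backbone.
import torch.nn as nn
import torch.nn.functional as F
from mmdet.models.builder import BACKBONES

@BACKBONES.register_module()
class ToyViTBackbone(nn.Module):
    """Placeholder backbone: a real one would wrap a ViT (timm, mmcls, ...)
    and expose 4 feature maps at strides 4/8/16/32 for the FPN neck."""

    def __init__(self, embed_dim=256):
        super().__init__()
        self.stem = nn.Conv2d(3, embed_dim, kernel_size=16, stride=16)
        # One lateral conv per pseudo pyramid level.
        self.levels = nn.ModuleList(nn.Conv2d(embed_dim, embed_dim, 1)
                                    for _ in range(4))

    def init_weights(self, pretrained=None):
        pass  # load ViT weights here in a real implementation

    def forward(self, x):
        tokens = self.stem(x)  # (N, C, H/16, W/16)
        outs = []
        for i, conv in enumerate(self.levels):
            scale = 2.0 ** (2 - i)  # strides 4, 8, 16, 32
            feat = F.interpolate(tokens, scale_factor=scale,
                                 mode='bilinear', align_corners=False)
            outs.append(conv(feat))
        return tuple(outs)

# In the config (import my_vit_backbone somewhere it gets executed, e.g. the
# backbones __init__.py, so the class registers), then:
# model = dict(backbone=dict(type='ToyViTBackbone', embed_dim=256),
#              neck=dict(type='FPN', in_channels=[256, 256, 256, 256],
#                        out_channels=256, num_outs=5))
```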

How should I separately visualize the CAMs of the classification task and the localization task in object detection? Could you share your implementation approach? Thanks!
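One plain-PyTorch idea, assuming no repository-specific API: hook a chosen layer, backprop a scalar built either from the classification score or from the localization (box regression) output, and weight the captured activations by the pooled gradients, Grad-CAM style, so the two tasks give separate maps. With a full detector you would typically call the backbone/neck/head directly rather than the training forward, since that needs img_metas.

```python
import torch

def grad_cam(model, layer, img_tensor, scalar_from_output):
    """Minimal Grad-CAM sketch (layer must return a single 4D tensor).

    scalar_from_output: callable reducing the model output to one scalar,
    e.g. the max classification score (classification CAM) or the summed
    box-regression magnitude (localization CAM).
    """
    acts, grads = {}, {}
    h1 = layer.register_forward_hook(
        lambda m, i, o: acts.setdefault('a', o.detach()))
    # On torch < 1.8, fall back to register_backward_hook.
    h2 = layer.register_full_backward_hook(
        lambda m, gi, go: grads.setdefault('g', go[0].detach()))
    try:
        out = model(img_tensor)
        model.zero_grad()
        scalar_from_output(out).backward()
    finally:
        h1.remove()
        h2.remove()
    weights = grads['g'].mean(dim=(2, 3), keepdim=True)   # GAP over H, W
    cam = torch.relu((weights * acts['a']).sum(dim=1))    # (N, H, W)
    return cam / (cam.max() + 1e-6)
```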

Hello, I'd like to run the SODA dataset with your code. I changed the DOTAv1.0 classes in BboxToolkit to the SODA-A classes, reinstalled BboxToolkit, and then split the SODA-A images. I fed the split images into the model as the dataset, but during both training and testing the model only reports gts for the first class. What could be going on?
Below is the mAP shown at test time:
![image](https://user-images.githubusercontent.com/42224576/225250030-2db525ea-9131-4d19-9793-4ef3d711bec7.png)
Below are the class ids in the pkl file, which clearly contain labels for all classes:
![image](https://user-images.githubusercontent.com/42224576/225251202-c76c2dbf-0625-4526-886d-102884c08991.png) ![image](https://user-images.githubusercontent.com/42224576/225251319-9e5339c4-e7fe-40e0-a323-ed97a5e90cb1.png)
Below is the log file of the test set from the image-splitting step:
![image](https://user-images.githubusercontent.com/42224576/225251457-0588821f-38a3-43da-a1b6-0b16b6bc861e.png)
Could you please help me figure this out? Thanks!
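A common cause of "only the first class has gts" is that the class list seen at train/evaluation time no longer matches the split annotations (e.g. the dataset config or the installed BboxToolkit still carries the old class order), so every label ends up mapped to index 0. A quick, repository-agnostic check on the split annotation pkl; the field names here are assumptions based on the screenshots and should be adjusted to whatever the file actually contains:

```python
# Count how many ground-truth instances each class id has in the split annotations.
import pickle
from collections import Counter

with open('path/to/annfile.pkl', 'rb') as f:   # the pkl produced by the split step
    data = pickle.load(f)

counts = Counter()
# Assumed layout: {'cls': [...], 'content': [{'ann': {'labels': [...]}}, ...]}.
for item in data['content']:
    counts.update(int(label) for label in item['ann']['labels'])

print('classes recorded in the pkl:', data.get('cls'))
print('instances per label id    :', dict(counts))
# If all ids show up here but evaluation still reports gts only for class 0,
# the CLASSES list used by the dataset/evaluation code is the likely mismatch.
```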