AdelaiDet
Could you provide Total-Text detection results under the DetEval protocol?
I've noticed that most papers report Total-Text detection results under the DetEval protocol rather than IoU 0.5. Could you provide the DetEval evaluation code? Thanks a lot.
@6098669 You may simply modify the evaluation code to add support for the DetEval metric; both metrics are based on the official RRC evaluation scripts. Personally, I would argue that the IoU metric is more reasonable than DetEval: as pointed out by the TIoU metric, the one-to-many (OM) and many-to-one (MO) matching used by DetEval can alleviate the inconsistency of annotation granularity to some extent, but it still suffers from some unsatisfactory cases:
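For reference, here is a minimal sketch of what DetEval-style matching looks like. This is not the AdelaiDet evaluation API; it assumes polygon annotations as point lists, uses the standard DetEval thresholds (area recall >= 0.8, area precision >= 0.4), and the function names are hypothetical:

```python
# Rough sketch of DetEval-style one-to-one matching (not the AdelaiDet API).
# Assumes polygons given as lists of (x, y) points and the standard DetEval
# thresholds: area recall >= 0.8, area precision >= 0.4.
from shapely.geometry import Polygon

T_R = 0.8  # area recall threshold
T_P = 0.4  # area precision threshold


def area_scores(gt_pts, det_pts):
    """Return (area recall, area precision) between one GT and one detection."""
    gt = Polygon(gt_pts).buffer(0)    # buffer(0) repairs minor self-intersections
    det = Polygon(det_pts).buffer(0)
    inter = gt.intersection(det).area
    recall = inter / gt.area if gt.area > 0 else 0.0
    precision = inter / det.area if det.area > 0 else 0.0
    return recall, precision


def one_to_one_matches(gts, dets):
    """Greedy one-to-one matching: a pair matches if recall >= T_R and precision >= T_P.
    The one-to-many / many-to-one cases (scored 0.8 in DetEval) are omitted here."""
    matched_gt, matched_det = set(), set()
    pairs = []
    for i, gt in enumerate(gts):
        if i in matched_gt:
            continue
        for j, det in enumerate(dets):
            if j in matched_det:
                continue
            r, p = area_scores(gt, det)
            if r >= T_R and p >= T_P:
                pairs.append((i, j))
                matched_gt.add(i)
                matched_det.add(j)
                break
    return pairs


if __name__ == "__main__":
    gts = [[(0, 0), (10, 0), (10, 2), (0, 2)]]
    dets = [[(0, 0), (9, 0), (9, 2), (0, 2)]]
    print(one_to_one_matches(gts, dets))  # [(0, 0)]
```

A full DetEval implementation would add the OM and MO branches on top of this matching step before accumulating precision/recall, which is exactly where the granularity issues discussed above come in.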