zhengye
I ran into the same problem on my dataset; I tried decreasing the learning rate, but it did not work.
Do you have any method to solve it?
The downloaded files are a split (multi-part) archive; please extract them following the standard procedure for split archives. Many people have downloaded and extracted them before without any problems.
This competition was quite a while ago. As I recall, during the competition the organizers merged some similar categories, so the number of classes was reduced at the source; no processing is needed on our side. The number of classes is simply whatever the given label file contains. The data preprocessing details are all documented in the README.
In training, the semantic_score is calculated [here](https://github.com/zhengye1995/Zero-shot-Instance-Segmentation/blob/45ee140205ce1f4acaaa6e72ddb526c962068378/mmdet/models/bbox_heads/convfc_bbox_semantic_head.py#L213): `semantic_score = torch.mm(semantic_score, self.vec)`. Therefore, the test code you mentioned above is consistent with training.
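As a shape-level sketch of that matrix product (NumPy standing in for PyTorch; all dimensions here are made up for illustration, and `vec` is assumed to hold one word vector per class as columns):

```python
import numpy as np

rng = np.random.default_rng(0)
num_boxes, embed_dim, num_classes = 100, 300, 49  # illustrative sizes only

semantic_feat = rng.standard_normal((num_boxes, embed_dim))  # box features in the semantic space
vec = rng.standard_normal((embed_dim, num_classes))          # one word vector per class (columns)

# torch.mm(semantic_score, self.vec): dot each box feature
# with every class embedding to get per-class scores
semantic_score = semantic_feat @ vec  # (100, 300) @ (300, 49) -> (100, 49)
```

So each box's classification score is its similarity to the word embedding of each class.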
In training, the classification loss function is ```F.cross_entropy(input, target)```, which applies softmax internally. Therefore, the semantic_score is effectively processed with softmax as well.
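To illustrate that the softmax lives inside the loss, here is a minimal NumPy computation of cross-entropy on a single example (values are arbitrary; this mirrors what `F.cross_entropy` does, it is not the library code itself):

```python
import numpy as np

logits = np.array([2.0, 1.0, 0.1])  # raw, un-normalized scores for 3 classes
target = 0                          # index of the true class

# softmax, computed explicitly (shifted by the max for numerical stability)
probs = np.exp(logits - logits.max())
probs /= probs.sum()

# cross-entropy = negative log-probability of the target class;
# the softmax step above is exactly what the loss applies internally
loss = -np.log(probs[target])
```

This is why you never apply a separate softmax before the loss during training.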
I got your problem. In inference, for seen classes, using:
```python
seen_scores = torch.mm(scores, self.vec.t())
seen_scores = torch.mm(seen_scores, self.vec)
```
or using the scores directly only has a numerical difference; the...
For unseen scores, this line ```unseen_scores = torch.mm(scores, self.vec.t())``` projects the scores back into the semantic space: e.g., scores (100, s) mm vec.t (s, 300) -> (100, 300); then this line...
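The two-step projection above can be sketched with NumPy (shapes only; the sizes and the `vec_unseen` matrix are hypothetical placeholders for the unseen-class word vectors, not names from the repository):

```python
import numpy as np

rng = np.random.default_rng(0)
num_boxes, num_seen, num_unseen, embed_dim = 100, 49, 17, 300  # illustrative sizes

scores = rng.random((num_boxes, num_seen))                 # seen-class scores per box
vec = rng.standard_normal((embed_dim, num_seen))           # seen-class word vectors
vec_unseen = rng.standard_normal((embed_dim, num_unseen))  # hypothetical unseen-class word vectors

# step 1: project seen-class scores back into the 300-d semantic space
semantic = scores @ vec.T              # (100, 49) @ (49, 300) -> (100, 300)
# step 2: score each box against the unseen-class embeddings
unseen_scores = semantic @ vec_unseen  # (100, 300) @ (300, 17) -> (100, 17)
```

The seen-class scores act as weights over the seen word vectors, giving each box a semantic embedding that can then be compared against classes never seen in training.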
I have not tried this before; you can give it a try. Good luck.
Generated automatically during the training process.