WeiJin_MUST

Results: 7 comments by WeiJin_MUST

> Maybe I should consider using [CLIP](https://github.com/openai/CLIP)?

If you introduce CLIP, wouldn't the task turn into an open-vocabulary object detection problem?
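
For context, here is a minimal sketch of how CLIP is typically used to score an image region against arbitrary text labels, following the usage pattern from the CLIP README; the crop path and label prompts are made-up placeholders, and this is not tied to any particular detector in this repo. Scoring class-agnostic region proposals this way is exactly what makes the setting open-vocabulary.

```python
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Candidate categories are just free-form text prompts, so the label set
# is not fixed at training time (hence "open-vocabulary").
labels = ["a photo of a zebra", "a photo of a traffic light", "a photo of a backpack"]
text = clip.tokenize(labels).to(device)

# "region_crop.jpg" stands in for a cropped box from a class-agnostic detector/RPN.
image = preprocess(Image.open("region_crop.jpg")).unsqueeze(0).to(device)

with torch.no_grad():
    logits_per_image, _ = model(image, text)   # image-text similarity scores
    probs = logits_per_image.softmax(dim=-1)   # probabilities over the label prompts

best = probs.argmax(dim=-1).item()
print(labels[best], probs[0, best].item())
```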

Yes, same problem. Did you solve it? My lab has a 1000M broadband connection, but when I download these datasets the speed is only about 15 KB/s. I don't know why.

I think I've found the problem: AP@K is the mAP over the K known classes, while mAP is the mean average precision that also includes the unknown classes. Is that why it is so low?
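
If that interpretation is right, the gap is just an averaging effect. A tiny sketch with made-up AP numbers (not from the repo) shows how a single hard unknown class drags the overall mean down:

```python
# Hypothetical per-class AP values, purely for illustration.
known_ap = {"car": 0.70, "person": 0.60, "dog": 0.50}   # the K known classes
unknown_ap = 0.10                                       # unknown class is usually much harder

# AP@K: mean AP over the known classes only.
ap_at_k = sum(known_ap.values()) / len(known_ap)                        # about 0.60

# mAP: mean AP over all classes, unknown included.
map_all = (sum(known_ap.values()) + unknown_ap) / (len(known_ap) + 1)   # about 0.475

print(f"AP@K = {ap_at_k:.3f}, mAP = {map_all:.3f}")
```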

> Hello! Your reproduction seems to be a bit better than the test results in the official link. May I ask how you did it? ![image](https://user-images.githubusercontent.com/92790302/230723028-78be4e59-17fb-4517-9a9b-950461db623c.png) For example, the number here is over 20,000 while yours is around 16,000.

In the screenshot I used the weights they provided. With the model I trained myself, the results are about the same as yours, with AP@K roughly 3%-4% lower. I don't know why. If you solve it, feel free to discuss it with me.

You need to pass a checkpoint file in addition to the config file, like this: `python demo/demo_vid.py configs/vid/selsa/selsa_faster_rcnn_r50_dc5_1x_imagenetvid.py --checkpoint selsa_faster_rcnn_r50_dc5_1x_imagenetvid_20201227_204835-2f5a4952.pth --input demo/demo.mp4 --output vid.mp4`

> Thanks. So does this model only detect the unknown classes during post-processing? Is that the right way to understand it?

Thanks for providing the Chinese write-up. While I'm here, let me also ask: how is evaluation done with this repo? Only the training command is given; what is the command for evaluating a model?