mmtracking
OpenMMLab Video Perception Toolbox. It supports Video Object Detection (VID), Multiple Object Tracking (MOT), Single Object Tracking (SOT), Video Instance Segmentation (VIS) with a unified framework.
Hello, I want to build a model that can track multiple drones in a video using only the drone data from the LaSOT dataset. I'd appreciate it if you could let me know...
reid
Why does your retrained ReID module based on MOT17 data reach very high accuracy (mAP > 80) in your reid_log (https://download.openmmlab.com/mmtracking/fp16/reid_r50_fp16_8x32_6e_mot17_20210731_033055.log.json), while when we reproduce it the accuracy is always ~60 mAP?
We use FGFA as the VID method and RetinaNet as the detector, and change "num_classes" in the RetinaNet config file to 3 to match our dataset. It can be...
How do I evaluate the tracker? Is there code for computing tracking metrics?
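For readers with the same question: trackers are usually scored with CLEAR-MOT metrics (MOTA/MOTP) or HOTA. Below is a minimal, self-contained sketch of a simplified MOTA, assuming greedy IoU matching and an illustrative 0.5 threshold; the function names and data layout are mine, not mmtracking's API (in practice a library such as py-motmetrics does this properly).

```python
def iou(a, b):
    """IoU of two boxes in (x1, y1, x2, y2) format."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def mota(gt_frames, pred_frames, iou_thr=0.5):
    """Simplified MOTA: 1 - (misses + false positives + ID switches) / #gt.

    gt_frames / pred_frames: one dict per frame mapping track_id -> box.
    Greedy IoU matching only; no Hungarian assignment, no HOTA refinements.
    """
    misses = fps = switches = gt_total = 0
    last_match = {}  # gt id -> pred id seen in earlier frames
    for gts, preds in zip(gt_frames, pred_frames):
        gt_total += len(gts)
        unmatched = dict(preds)
        for gid, gbox in gts.items():
            best, best_iou = None, iou_thr
            for pid, pbox in unmatched.items():
                o = iou(gbox, pbox)
                if o >= best_iou:
                    best, best_iou = pid, o
            if best is None:
                misses += 1          # ground truth left unmatched
            else:
                if gid in last_match and last_match[gid] != best:
                    switches += 1    # same object, new predicted ID
                last_match[gid] = best
                del unmatched[best]
        fps += len(unmatched)        # predictions matching no ground truth
    return 1.0 - (misses + fps + switches) / max(gt_total, 1)
```

A perfect two-frame result scores 1.0; swapping the predicted ID between frames costs one ID switch out of two ground-truth boxes, giving 0.5.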
During IoU matching, the detection boxes of the current frame are matched against those of the previous frame. When a track is updated, all that is obtained is a new mean and covariance. The final 2D boxes are also taken directly from the detections. In summary, the results with and without the Kalman filter appear identical: the Kalman filter just keeps updating each track's mean and covariance, but the updated mean and covariance seem to have no actual use.
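For context on where the updated mean actually matters: in SORT-style trackers the track's box for the next frame comes from the Kalman *prediction*, and it is this predicted box, not the stale previous detection, that is matched against current detections. A toy constant-velocity sketch (state layout and class name are illustrative, not mmtracking's implementation):

```python
import numpy as np

class ConstantVelocityKF:
    """Toy constant-velocity Kalman filter over a box center.

    State: [cx, cy, vx, vy]; measurement: [cx, cy].
    predict() moves the track forward one frame, so matching is done
    against the *predicted* position rather than the last detection.
    """
    def __init__(self, cx, cy):
        self.x = np.array([cx, cy, 0.0, 0.0])           # state mean
        self.P = np.eye(4) * 10.0                       # state covariance
        self.F = np.array([[1, 0, 1, 0],                # transition (dt = 1)
                           [0, 1, 0, 1],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0],                # measurement model
                           [0, 1, 0, 0]], dtype=float)
        self.Q = np.eye(4) * 0.01                       # process noise
        self.R = np.eye(2) * 1.0                        # measurement noise

    def predict(self):
        """Advance the mean one frame; the matcher uses this position."""
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, cx, cy):
        """Fold a matched detection back into the state."""
        y = np.array([cx, cy]) - self.H @ self.x        # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)        # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
```

After a few predict/update cycles on a target moving at constant speed, `predict()` returns a position ahead of the last detection, which is exactly what keeps IoU (or Mahalanobis-gated) matching tight for fast-moving objects.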
Why does retraining the ReID config always yield very low accuracy (mAP: 50)? According to your log & pretrained model, it can achieve about 96% mAP.
I want to fine-tune SiameseRPN++ on my own dataset, but my dataset, modified and annotated according to CocoVID's format, cannot be used directly. For example, an error is reported...
We keep this issue open to collect feature requests from users and hear your voice. You can either: 1. Suggest a new feature by leaving a comment. 2. Vote for...
model config: vis/masktrack_rcnn/masktrack_rcnn_x101_fpn_12e_youtubevis2021.py
model input: input video folder
error description: running inference with inference_mot, then
ids = result['track_bboxes'][0][:,0].tolist()
print(ids) ########## [1, 10, 2, 6, 3, 14, 32, 13, 17, 7,...
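For anyone puzzled by the unsorted output above: each per-frame `track_bboxes` entry is an array whose rows are `[track_id, x1, y1, x2, y2, score]` (the issue's own `[:, 0]` slice is reading column 0, the IDs), and IDs follow matching order rather than being sorted or contiguous. A small sketch of collecting the IDs seen across a video, with dummy arrays standing in for real results:

```python
import numpy as np

def collect_track_ids(per_frame_results):
    """Gather the set of track IDs seen across a video.

    per_frame_results: list of (N, 6) arrays, rows [id, x1, y1, x2, y2, score].
    IDs are assigned in matching order, so they need not be sorted per frame.
    """
    ids = set()
    for arr in per_frame_results:
        ids.update(int(i) for i in arr[:, 0])
    return sorted(ids)

# two dummy frames with overlapping, unsorted IDs
frame0 = np.array([[1, 0, 0, 10, 10, 0.9],
                   [10, 5, 5, 20, 20, 0.8]])
frame1 = np.array([[2, 0, 0, 10, 10, 0.7],
                   [1, 5, 5, 20, 20, 0.6]])
```

Here `collect_track_ids([frame0, frame1])` yields the three distinct IDs 1, 2, and 10, even though no single frame lists them in order.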