mmtracking
OpenMMLab Video Perception Toolbox. It supports Video Object Detection (VID), Multiple Object Tracking (MOT), Single Object Tracking (SOT), Video Instance Segmentation (VIS) with a unified framework.
Thanks for your error report; we appreciate it a lot. **Checklist** 1. I have searched related issues but could not get the expected help. 2. The bug has not been...
Support `Observation-Centric SORT: Rethinking SORT for Robust Multi-Object Tracking` ([official code](https://github.com/noahcao/OC_SORT), [arXiv](https://arxiv.org/abs/2203.14360)). The implementation has been verified on the MOT17-val set. [WIP]: 1. Polish the README file at `configs/mot/ocsort/README.md` 2....
How to solve this error?
**Describe the bug** When I use **ATSS** instead of **YOLOX** as the detector of **ByteTrack**, an error occurred: `TypeError: Caught TypeError in DataLoader worker process 0.` `TypeError: list indices must be...
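For context, a minimal sketch of what such a detector swap typically looks like in an mmtracking config. The base file name and all ATSS settings below are illustrative assumptions, not a verified fix; the loss, assigner, and test settings are omitted for brevity:

```python
# Hypothetical config sketch: swapping ByteTrack's detector from YOLOX to ATSS.
# Field names follow the usual mmtracking/mmdet config conventions; the exact
# values are placeholders, not a tested configuration.
_base_ = ['./bytetrack_yolox_x_crowdhuman_mot17-private-half.py']

model = dict(
    detector=dict(
        _delete_=True,  # drop the inherited YOLOX detector entirely
        type='ATSS',
        backbone=dict(
            type='ResNet', depth=50, num_stages=4, out_indices=(0, 1, 2, 3)),
        neck=dict(
            type='FPN', in_channels=[256, 512, 1024, 2048], out_channels=256,
            start_level=1, add_extra_convs='on_output', num_outs=5),
        bbox_head=dict(
            type='ATSSHead', num_classes=1, in_channels=256,
            stacked_convs=4, feat_channels=256,
            anchor_generator=dict(
                type='AnchorGenerator', ratios=[1.0], octave_base_scale=8,
                scales_per_octave=1, strides=[8, 16, 32, 64, 128]))))
```

Note that a `TypeError` raised inside a DataLoader worker usually points at the data pipeline rather than the model: YOLOX-based ByteTrack configs typically wrap the training data in a mosaic/mixup pipeline that other detectors do not expect, so the inherited `train_pipeline` and dataset wrapper would likely need to be replaced as well.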
Is it okay?

```
[>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>] 29528/29528, 5.0 task/s, elapsed: 5959s, ETA: 0s
Evaluate CLEAR MOT results.
Traceback (most recent call last):
  File "tools/test.py", line 225, in <module>
    main()
  File "tools/test.py", line 215, ...
```
Is there a reproduction of OC-SORT? I haven't seen one so far.
I found that when running the demo, a pretrained model is loaded from the internet even though I have already specified a checkpoint. How can I avoid loading the checkpoint from the web? My command is: `python demo/demo_mot_vis.py configs/mot/deepsort/sort_faster-rcnn_fpn_4e_mot17-private.py --checkpoint checkpoints/faster-rcnn_r50_fpn_4e_mot17.pth --input demo/demo.mp4 --output mot.mp4`. Here `checkpoints/faster-rcnn_r50_fpn_4e_mot17.pth` is a checkpoint downloaded from the web and saved to the system cache folder. I later found that the config has to be modified, but when I changed the checkpoint path in the config to `checkpoints/faster-rcnn_r50_fpn_4e_mot17.pth`, the following message appeared. How should I handle this?
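A minimal sketch of the kind of config override involved, assuming the usual mmtracking layout where the detector's pretrained weights are declared via an `init_cfg` with a remote URL; verify the exact key in your copy of `configs/mot/deepsort/sort_faster-rcnn_fpn_4e_mot17-private.py`:

```python
# Hedged sketch: pointing the detector's pretrained checkpoint at a local file
# instead of the default download.openmmlab.com URL, so nothing is fetched
# from the network. The nesting below follows common mmtracking conventions.
model = dict(
    detector=dict(
        init_cfg=dict(
            type='Pretrained',
            # local path replaces the remote URL
            checkpoint='checkpoints/faster-rcnn_r50_fpn_4e_mot17.pth')))
```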
## Motivation
`torch.meshgrid()` has started raising a warning when it is not called with an explicit `indexing` parameter: https://pytorch.org/docs/stable/generated/torch.meshgrid.html
## Modification
Provide `indexing='ij'` to all calls to `torch.meshgrid` that don't already specify...
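A minimal sketch of the change, assuming PyTorch 1.10+ where `torch.meshgrid` accepts the `indexing` keyword; passing `indexing='ij'` reproduces the historical default while silencing the warning:

```python
import torch

xs = torch.arange(4)
ys = torch.arange(3)

# Before: relies on the implicit default and triggers the UserWarning.
# grid_x, grid_y = torch.meshgrid(xs, ys)

# After: spell out the default 'ij' (matrix) indexing explicitly. The
# outputs are identical to the old behavior, but the warning goes away.
grid_x, grid_y = torch.meshgrid(xs, ys, indexing='ij')
```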
As titled: hope the framework can support Mask2Former for VIS ASAP, since it is the new SOTA in the VIS direction.
Hello! I am trying to run `demo/demo_mot_vis.py` with my own bounding boxes from the [MTA dataset](https://github.com/schuar-iosb/mta-dataset). The authors of the MTA dataset provide a script to convert it into COCO format. I run this...
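For reference, a minimal sketch of the CocoVID-style annotation layout that mmtracking's MOT datasets consume, which extends plain COCO with `videos`, per-frame `video_id`/`frame_id` fields, and an `instance_id` per annotation. The field names follow mmtracking's documented CocoVID format; the concrete values and the output file name are illustrative placeholders:

```python
import json

# Hedged sketch of a minimal CocoVID annotation file; values are placeholders.
coco_vid = {
    'categories': [{'id': 1, 'name': 'pedestrian'}],
    'videos': [{'id': 1, 'name': 'cam_0'}],
    'images': [{
        'id': 1,
        'video_id': 1,                # which video this frame belongs to
        'frame_id': 0,                # index of the frame within the video
        'file_name': 'cam_0/000000.jpg',
        'width': 1920,
        'height': 1080,
    }],
    'annotations': [{
        'id': 1,
        'image_id': 1,
        'category_id': 1,
        'instance_id': 7,             # track identity: the key extra field vs plain COCO
        'bbox': [100.0, 150.0, 40.0, 80.0],  # x, y, w, h
        'area': 3200.0,
        'iscrowd': 0,
    }],
}

with open('mta_cocovid.json', 'w') as f:
    json.dump(coco_vid, f)
```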