
OpenMMLab Video Perception Toolbox. It supports Video Object Detection (VID), Multiple Object Tracking (MOT), Single Object Tracking (SOT), and Video Instance Segmentation (VIS) within a unified framework.

Results: 222 mmtracking issues, sorted by recently updated

When I use `torchsummary.summary(model)` in test.py, it gives me `TypeError: forward() missing 1 required positional argument: 'img_metas'`. How can I solve this problem? Thanks a lot!
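The error arises because mmtracking models take `img_metas` alongside the image tensor, while torchsummary only calls `forward(x)` with a single input. One workaround (a hedged sketch, not mmtracking's own API) is to skip torchsummary and count parameters directly, which works for any `torch.nn.Module`:

```python
import torch

def count_parameters(model):
    """Return (total, trainable) parameter counts for a torch.nn.Module.

    Unlike torchsummary.summary(), this never calls forward(), so it does
    not need img_metas or any other positional arguments.
    """
    total = sum(p.numel() for p in model.parameters())
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    return total, trainable

# Demonstrated on a plain module; in practice you would pass a model built
# via mmtracking's build_model() instead.
model = torch.nn.Linear(10, 5)
total, trainable = count_parameters(model)
print(total, trainable)  # 55 55 (weight 5x10 + bias 5)
```

If you need per-layer output shapes rather than just counts, you would have to wrap the model so that `forward` is called with valid `img_metas`, which depends on the specific config.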

In "/mmtracking/mmtrack/models/motion/flownet_simple.py", the init parameters are `flow_img_norm_std=[255.0, 255.0, 255.0]` and `flow_img_norm_mean=[0.411, 0.432, 0.450]`. What is the meaning of these parameters? I'm using a type of data with 10 channels; how should...
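These values look like FlowNet-style input normalization: raw 0-255 pixels are divided by the std (255.0) to land in [0, 1], then a per-channel mean is subtracted. A hedged sketch of that interpretation (the function name here is illustrative, not mmtracking's actual helper):

```python
import numpy as np

# Assumed interpretation of FlowNetSimple's normalization parameters:
# divide by std to map 0-255 pixels into [0, 1], then center with the
# per-channel means.
flow_img_norm_std = np.array([255.0, 255.0, 255.0])
flow_img_norm_mean = np.array([0.411, 0.432, 0.450])

def normalize_for_flownet(img):
    """img: HxWx3 uint8 array -> float array roughly centered around 0."""
    return img.astype(np.float64) / flow_img_norm_std - flow_img_norm_mean

img = np.full((4, 4, 3), 255, dtype=np.uint8)  # all-white test image
out = normalize_for_flownet(img)
print(out[0, 0])  # approximately [0.589, 0.568, 0.550]
```

For 10-channel data you would need 10-element mean/std lists, but note that FlowNetSimple's pretrained weights assume 3-channel RGB input, so the first conv layer would also need adapting.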

For VID methods like FGFA and SELSA, the authors report mAP (slow), mAP (medium), and mAP (fast) in their papers. When testing with mmtracking, I found there are results of...

Hi, I am facing the same error as https://github.com/open-mmlab/mmtracking/issues/358, and I have tried the solutions given there, but still no luck. `bash ./tools/dist_train.sh /configs/vid/temporal_roi_align/selsa_troialign_faster_rcnn_r50_dc5_7e_imagenetvid.py 2` `Traceback (most recent call last): File...`

When testing the FGFA or SELSA method with the ILSVRC dataset, I can't find `ann_file=data_root + 'annotations/imagenet_vid_val.json'`. Where can I get the ann_file for training and testing?
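The COCO-style JSON annotations are not part of the raw ImageNet VID release; mmtracking ships conversion scripts under `tools/convert_datasets/` that generate them. The exact script path and flags below are from memory and may differ across mmtracking versions, so check the repository's dataset-preparation docs before running:

```shell
# Hedged sketch (script name/flags may vary by mmtracking version):
# convert raw ImageNet VID annotations into the COCO-style JSON files
# (e.g. imagenet_vid_val.json) that the VID configs expect.
python ./tools/convert_datasets/ilsvrc/imagenet2coco_vid.py \
    -i ./data/ILSVRC \
    -o ./data/ILSVRC/annotations
```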