
Memory Enhanced Global-Local Aggregation for Video Object Detection, CVPR 2020

65 mega.pytorch issues, sorted by recently updated

Hello, I want to train the network only with VID instead of DET. When I changed DATASETS.TRAIN in base_rcnn_1gpu.yaml and vid_R_101_C4_mega_1x.yaml, the following error occurred. How to...
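
For reference, a hedged sketch of how a VID-only DATASETS.TRAIN override is typically expressed with the repo's yacs config. The config path and the dataset key name are taken from the default configs and are assumptions; they must match entries in the repo's paths_catalog.py:

```python
from mega_core.config import cfg

# Load the base MEGA config (path assumed from the repo layout).
cfg.merge_from_file("configs/MEGA/vid_R_101_C4_MEGA_1x.yaml")

# Default configs pair a DET split with a VID split, e.g.
# ("DET_train_30classes", "VID_train_15frames"). For VID-only training,
# override DATASETS.TRAIN with a tuple containing only VID splits.
cfg.merge_from_list(["DATASETS.TRAIN", '("VID_train_15frames",)'])
```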

/home/ming/anaconda3/lib/python3.8/subprocess.py:844: RuntimeWarning: line buffering (buffering=1) isn't supported in binary mode, the default buffer size will be used
  self.stdout = io.open(c2pread, 'rb', bufsize)
Traceback (most recent call last):
  File "demo_1.py", line...
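
For context on that RuntimeWarning: Python 3.8's subprocess emits it when `Popen` is given `bufsize=1` (line buffering) while the pipes are still in binary mode. A minimal sketch of the usual remedy (the command is a placeholder from the issue, not a known-good invocation):

```python
import subprocess

# Either drop bufsize=1, or switch the pipes to text mode so that
# line buffering is actually valid:
proc = subprocess.Popen(
    ["python", "demo_1.py"],   # hypothetical command
    stdout=subprocess.PIPE,
    bufsize=1,
    universal_newlines=True,   # text mode; silences the warning
)
for line in proc.stdout:
    print(line, end="")
proc.wait()
```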

Is the output of FlowNet in DFF still optical flow? And if so, how do you visualize it?
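
For context: in DFF the FlowNet branch predicts a (scaled) flow field that is used to warp key-frame features, so it can be visualized like ordinary optical flow. A minimal sketch using the standard HSV encoding with OpenCV (not code from this repo):

```python
import numpy as np
import cv2

def flow_to_color(flow):
    """Visualize an (H, W, 2) flow field: hue = direction, brightness = magnitude."""
    fx = flow[..., 0].astype(np.float32)
    fy = flow[..., 1].astype(np.float32)
    mag, ang = cv2.cartToPolar(fx, fy)
    hsv = np.zeros((*flow.shape[:2], 3), dtype=np.uint8)
    hsv[..., 0] = ang * 180 / np.pi / 2                              # hue: flow direction
    hsv[..., 1] = 255                                                # full saturation
    hsv[..., 2] = cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX)  # brightness: magnitude
    return cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)
```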

VID initial release: 22 GB, MD5: b329300dd0cd4422171878970d30e1da. VID initial release snippets: 15 GB, MD5: 4e8f46f7d507edec5a42e1c25de664c3. There are a total of 1952 snippets for training. The number of snippets for each synset ranges...
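
It is worth verifying those checksums before unpacking. A small sketch (the archive filename is a placeholder for whatever you downloaded):

```python
import hashlib

def md5sum(path, chunk=1 << 20):
    """Stream the file in 1 MiB chunks so large archives fit in memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

# Compare against the MD5s listed above; filename is hypothetical.
print(md5sum("VID_initial_release.tar") == "b329300dd0cd4422171878970d30e1da")
```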

Thanks for your work. When reading the test_net.py code, I cannot find any code for the BLR post-processing. Hasn't this part been released yet, or did I just miss it somewhere...

I have encountered a problem when using my own dataset for training and evaluation. What is the specific content of the file vid_groundtruth_motion_iou.mat? When I use my own data...
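
For context: in the ImageNet VID evaluation protocol (following FGFA), each ground-truth object is assigned a motion IoU score, the averaged IoU of its box with the same object's boxes in nearby frames, and objects are bucketed into fast (< 0.7), medium (0.7-0.9), and slow (> 0.9) for the motion-specific mAP breakdown; the .mat file stores these per-object scores. A quick way to inspect your copy (the internal field names are an assumption, so check the keys):

```python
from scipy.io import loadmat

# List the top-level variables stored in the .mat file; a custom dataset
# needs a file with the same fields as the one shipped for ImageNet VID.
mat = loadmat("vid_groundtruth_motion_iou.mat")
print([k for k in mat if not k.startswith("__")])
```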

https://github.com/Scalsol/mega.pytorch/blob/a6aa6e0537b82d70da94228100a51e6a53d98f82/mega_core/data/datasets/vid_mega.py#L133 `images["cur"] = img`. However, this `img` does not correspond to the current `idx` when `cfg.MODEL.VID.MEGA.GLOBAL.ENABLE == True`: the for loop at line 116 replaces `img` on each iteration https://github.com/Scalsol/mega.pytorch/blob/a6aa6e0537b82d70da94228100a51e6a53d98f82/mega_core/data/datasets/vid_mega.py#L116...
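
A self-contained sketch of the rebinding problem being reported (stand-in loader, not the repo's code):

```python
def load_image(i):
    return f"frame_{i}"  # stand-in for the real image loader

idx, global_indices = 5, [10, 11, 12]
images = {"global": []}

img = load_image(idx)                # frame for the current index
for g_idx in global_indices:
    img = load_image(g_idx)          # rebinds `img` on every iteration
    images["global"].append(img)

images["cur"] = img                  # BUG: now "frame_12", not "frame_5"
images["cur"] = load_image(idx)      # fix: reload, or keep a separate name for the current frame
```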

How are the results of local aggregation and global aggregation combined? What is the aggregation operation? Which line of code implements the aggregation of the results of local aggregation and...
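
For context: MEGA combines the current frame's box features with local and global (memory-enhanced) reference features through stacked attention-based relation modules. The sketch below is a single simplified attention step under that reading, not the repo's actual operator:

```python
import torch
import torch.nn.functional as F

def aggregate(cur, ref):
    """Attention-style aggregation sketch: each current-frame box feature
    takes a similarity-weighted sum over reference features (local + global
    memory pooled together) and adds it back as a residual.
    cur: (N, C) current-frame box features; ref: (M, C) reference features."""
    scores = cur @ ref.t() / cur.shape[1] ** 0.5   # scaled dot-product similarity
    weights = F.softmax(scores, dim=1)             # one distribution per box
    return cur + weights @ ref                     # residual update with aggregated context

cur = torch.randn(8, 256)     # 8 proposals on the current frame
ref = torch.randn(300, 256)   # pooled local + global reference features
out = aggregate(cur, ref)     # (8, 256)
```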