Zengyi Qin

Results: 125 comments by Zengyi Qin

Thank you for your interest! R is not explicitly implemented. I'll try to give you a clear understanding of the coordinate systems in what follows: ![diff](https://user-images.githubusercontent.com/40556743/75113207-c4779880-5686-11ea-831a-d19d154c1096.PNG) Using a monocular image,...

Almost correct. Their central location `pred_locations` is in camera coordinates, but their rotation has not been transformed back to camera coordinates. If you are only hoping to visualize the results,...
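For readers wondering what "transformed back to camera coordinates" means in practice, here is a minimal sketch of the usual KITTI-style conversion, assuming the network predicts an observation (local) angle `alpha` for each box; the function name and signature are illustrative and not taken from the MonoGRNet source:

```python
import numpy as np

def local_to_camera_yaw(alpha, location):
    """Convert an observation (local) angle `alpha` into a global
    rotation_y in camera coordinates, given the object's center
    `location = (x, y, z)` in camera coordinates.

    This follows the standard KITTI relation
        rotation_y = alpha + arctan2(x, z),
    which compensates for the viewing-ray direction of the object.
    """
    x, _, z = location
    rot_y = alpha + np.arctan2(x, z)
    # Wrap to (-pi, pi] for consistency with KITTI labels.
    return (rot_y + np.pi) % (2 * np.pi) - np.pi
```

An object straight ahead of the camera (x = 0) has a viewing-ray angle of zero, so its local and global angles coincide.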

Thank you for your interest! If you're reproducing MonoGRNet on top of a two-stage detector, here are some suggestions: - Please don't modify the RPN (first) stage, which is just for 2D...

To the comment above: thanks for your interest! I just wrote a [new reply](https://github.com/Zengyi-Qin/MonoGRNet/issues/48) giving a detailed explanation of this.

Thanks for your interest. My PC does not currently support Chinese input, so let me reply in English. The pipeline reads the paths of the validation images from `data/KittiBox/val.txt`, where each...

https://github.com/Zengyi-Qin/MonoGRNet/blob/97b6f9308e24d010713fb45e4e5ca57adf7e409c/evals/kitti_eval.py#L134 This line loads the validation images.

That will not affect the final results. You can check the detection results in `outputs/kittiBox/val_out/xxxxx.txt`, which should follow the KITTI format.
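For reference, a KITTI-format detection line has a fixed field order (type, truncation, occlusion, alpha, 2D box, 3D dimensions, 3D location, rotation_y, plus an optional score for detections). Here is a minimal, hypothetical parser for checking those output files; it is not part of the MonoGRNet code:

```python
def parse_kitti_line(line):
    """Parse one KITTI-format detection/label line into a dict.

    Field order: type, truncated, occluded, alpha,
    2D bbox (left, top, right, bottom),
    3D dimensions (height, width, length),
    3D location (x, y, z) in camera coordinates,
    rotation_y, and an optional confidence score.
    """
    f = line.split()
    det = {
        "type": f[0],
        "alpha": float(f[3]),
        "bbox": [float(v) for v in f[4:8]],
        "dimensions": [float(v) for v in f[8:11]],
        "location": [float(v) for v in f[11:14]],
        "rotation_y": float(f[14]),
    }
    if len(f) > 15:  # detections carry a trailing score; labels do not
        det["score"] = float(f[15])
    return det
```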

The pretrained 2D detector cannot be used directly on your new dataset, since the RGB cameras might have different color distributions and noise. - In your case (and for...

- The code at the error location loads the pretrained 3D model by default. If you don't need it, you can comment out that model-loading code and the error will go away. If you want to retrain, please refer to the answer at https://github.com/Zengyi-Qin/MonoGRNet/issues/1#issuecomment-474408826 - The supplementary material has been sent to your email

Hi, thanks for your interest. I haven't encountered this problem before, but I would suggest re-creating a clean Python 2.7 environment and following the setup guidance of...