
Unclear how to run the semantic mask part along with ORB_SLAM (e.g. the mono_kitti node)

Open mshong0320 opened this issue 5 years ago • 12 comments

I've been trying to figure out how to get the semantic part to run along with ORB_SLAM2. The only thing that's been working for me so far is running a ROS node with ("rosrun ORB_SLAM2 mono_kitti Vocabulary/ORBvoc.txt Examples/Monocular/KITTI03.yaml") or running ("./Examples/Monocular/mono_kitti Vocabulary/ORBvoc.txt Examples/Monocular/KITTI0X.yaml PATH_TO_SEQUENCE").

Can someone provide some more detailed guidance to running the semantic mask part along with the orb slam2 to get the result as shown on the sample video included in this repo?

Thank you in advance!

mshong0320 avatar Nov 07 '19 16:11 mshong0320

Hi, we implemented our method only with the stereo nodes. Please precompute the labels and put them into $kitti_dataset_dir/$sequence/dla/labelIds (e.g. with https://github.com/ucbdrive/dla; the ID/color to semantic label mapping is defined in slamantic/labels/labels-cityscapes.yaml). Unfortunately we are currently not able to share the labeling code itself. Then use e.g. KITTI03-df.yaml for the same configuration as shown in the video.
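For reference, a minimal sketch of the expected layout (the directory names are taken from the comment above; the zero-padded frame naming is an assumption, not confirmed by the maintainers):

```python
from pathlib import Path

# Hypothetical helper: build the path where slamantic would look for the
# precomputed semantic label image of one frame, following the layout
#   $kitti_dataset_dir/$sequence/dla/labelIds/<frame>.png
def label_path(kitti_dir: str, sequence: str, frame_idx: int) -> Path:
    """Return the assumed path of the precomputed label image for a frame."""
    return Path(kitti_dir) / sequence / "dla" / "labelIds" / f"{frame_idx:06d}.png"

print(label_path("/data/kitti", "03", 42))
```

This is only a sketch of the folder convention; check the slamantic loader code for the actual file naming it expects.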

mthz avatar Nov 08 '19 12:11 mthz

Thank you for the reply. After precomputing the labels, do I have to run both DynaSLAM and ORB_SLAM2 together at the same time? I'm still confused about how to use the output from DynaSLAM as an input to ORB_SLAM2.

mshong0320 avatar Nov 09 '19 00:11 mshong0320

Hi!

DynaSLAM is a completely different method. We just used it to compare our method to the state of the art. You only need to run the stereo node examples with precomputed semantic segmentation images.

Best, Martin


humenbergerm avatar Nov 09 '19 08:11 humenbergerm

Hi,

Still having some issues. Right now I'm stuck on how to use https://github.com/ucbdrive/dla to precompute the labels. Can you give me some guidance on that? I looked through the dla repo, but it isn't clear where in the files the precomputation of labels happens.

Also, I downloaded all 6 vkitti files and am trying to run vkitti_create_extrinsic.py with the --data directory set to ./vkitti_1.3.1_extrinsicsgt/. I chose that directory because none of the others contain any extrinsic files; none of the vkitti dataset folders I downloaded from Naver Labs Europe has extrinsic.txt files in them. Am I missing something here?

Also, on line 92 of vkitti_create_extrinsic.py I had to change scenes = ["Scene01", "Scene02", "Scene18", "Scene06", "Scene20"] to scenes = ["0001", "0002", "0018", "0006", "0020"] to match the dataset I downloaded. I'm guessing I have the wrong dataset. If so, how can I get the dataset that was used?

mshong0320 avatar Nov 14 '19 16:11 mshong0320

Hi!

I am sorry, the vkitti2 dataset (which is needed for slamantic) has not been released yet. I will add this to the readme file. It will be released soon; in the meantime, please use kitti.

Best, Martin

humenbergerm avatar Nov 14 '19 16:11 humenbergerm

Hi Martin, thank you for your reply. Do you know when that vkitti2 dataset will be released? Could you also answer my question about using dla to generate kitti/sequence/dla/labelIDs/, kitti/sequence/dla/labelProbabilities/, and the semantic labels for kitti?

Thank you again.

Best regards,

Chris

mshong0320 avatar Nov 14 '19 17:11 mshong0320

Hi!

I would guess the beginning of December 2019. Sorry, I did not run dla myself; I'll let mthz answer that.

Best, Martin

humenbergerm avatar Nov 14 '19 18:11 humenbergerm

Thank you for the reply Martin!

mshong0320 avatar Nov 14 '19 18:11 mshong0320

Hi, I'd also like to ask whether you considered using bounding box detection algorithms like Detectron or Darknet to identify the features of dynamic objects instead of using semantic labeling. Which method would be the better way to go, and why?

mshong0320 avatar Nov 15 '19 02:11 mshong0320

Hi, unfortunately I cannot share the model and semantic generation we used with the DLA algorithm, but you can use any semantic labeling implementation. The input is a semantic image where I(x,y) = semantic class ID. You can also use a colored semantic image where I(x,y) = (r,g,b).
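As a rough illustration of that input convention, here is a minimal Python sketch that maps a colored semantic image to class IDs. The color/ID pairs below follow the standard Cityscapes palette; the mapping slamantic actually expects is the one defined in slamantic/labels/labels-cityscapes.yaml:

```python
# Assumed color->ID pairs from the standard Cityscapes label definitions;
# verify them against slamantic/labels/labels-cityscapes.yaml before use.
CITYSCAPES_COLOR_TO_ID = {
    (128, 64, 128): 7,   # road
    (244, 35, 232): 8,   # sidewalk
    (70, 70, 70): 11,    # building
    (220, 20, 60): 24,   # person
    (0, 0, 142): 26,     # car
}

def color_to_label_ids(rgb_image, unknown_id=0):
    """Map each (r, g, b) pixel of a nested-list image to its class ID."""
    return [[CITYSCAPES_COLOR_TO_ID.get(tuple(px), unknown_id) for px in row]
            for row in rgb_image]

# Tiny 2x2 example image: road, car / person, unmapped color.
tiny = [[(128, 64, 128), (0, 0, 142)],
        [(220, 20, 60), (1, 2, 3)]]
print(color_to_label_ids(tiny))  # [[7, 26], [24, 0]]
```

In practice you would do this with an image library over real PNG files; the nested lists here just keep the sketch dependency-free.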

Yes, you could also use bounding boxes to gather the semantic information, but you have to take into account that 3D points could lie within a bounding box without being on the object itself. Furthermore, you have to handle overlapping boxes somehow. Semantic labeling also makes it easy to use background labels such as road, sidewalk, and building. We started directly with a semantic labeling model and thus didn't run experiments with bounding box detectors.
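To make that caveat concrete, here is a small hypothetical sketch of the bounding-box alternative: any keypoint inside a detected box gets flagged as potentially dynamic, including background points the box happens to cover, which is exactly the weakness a per-pixel semantic mask avoids. All names are illustrative, not from the slamantic codebase:

```python
# Hypothetical box test; boxes are (x0, y0, x1, y1) in pixel coordinates.
def in_box(x, y, box):
    x0, y0, x1, y1 = box
    return x0 <= x <= x1 and y0 <= y <= y1

def is_potentially_dynamic(x, y, dynamic_boxes):
    """Flag a keypoint if it lies inside ANY dynamic-object box.

    Overlapping boxes are handled implicitly (any hit flags the point),
    but background pixels inside a box are misclassified as dynamic.
    """
    return any(in_box(x, y, b) for b in dynamic_boxes)

boxes = [(10, 10, 50, 50), (40, 40, 80, 80)]  # two overlapping car detections
print(is_potentially_dynamic(45, 45, boxes))  # True (inside both boxes)
print(is_potentially_dynamic(5, 5, boxes))    # False (outside all boxes)
```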

Best, Matthias

mthz avatar Nov 15 '19 14:11 mthz

Hi Matthias,

Thank you very much for the detailed reply.

Best regards,

Chris

mshong0320 avatar Nov 15 '19 16:11 mshong0320

Hi!

VKITTI2 is released now. I added the link to the readme file.

Best, Martin

humenbergerm avatar Jan 30 '20 08:01 humenbergerm