Wrap DynSLAM as a ROS node
Have you considered making this a ROS node, so that depth and segment classification can be integrated via ROS messages? This would allow switching between different implementations of these components, and would be language-agnostic.
Yes, I have. You are correct; those would be very neat advantages, but it would take a bit of time to refactor everything. It's on my todo list, though. And once again, very sorry about the slow replies, it's a busy period here and I'm also CUDA-less. :(
Note to self: Peidong suggested ROS2 should be used (http://wiki.ros.org/ROS/Tutorials/WritingPublisherSubscriber%28c%2B%2B%29).
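For anyone sketching such a node: the depth and segmentation inputs would arrive as separately timestamped messages that need to be matched up before fusion (similar in spirit to ROS's approximate-time synchronizer). Below is a ROS-agnostic sketch of that pairing logic in plain Python so it runs anywhere; all names are hypothetical and nothing here exists in DynSLAM itself:

```python
# Hypothetical sketch: pair depth and segmentation "messages" by timestamp,
# roughly what message_filters' approximate-time synchronization does in ROS.
from collections import namedtuple

Msg = namedtuple("Msg", ["stamp", "data"])  # stamp in seconds

def pair_messages(depth_msgs, seg_msgs, slop=0.05):
    """Greedily match each depth message to the closest segmentation
    message within `slop` seconds; unmatched messages are dropped."""
    pairs = []
    remaining = list(seg_msgs)
    for d in depth_msgs:
        best = min(remaining, key=lambda s: abs(s.stamp - d.stamp), default=None)
        if best is not None and abs(best.stamp - d.stamp) <= slop:
            pairs.append((d, best))
            remaining.remove(best)  # each segmentation message is used once
    return pairs
```

In a real node, each matched pair would then be handed to the reconstruction pipeline as one frame.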
Have you already made this into a ROS node?
Sorry, I haven't added ROS support and I don't think I'll have time in the foreseeable future. I'd be more than happy to review PRs adding that if someone does it!
Do you know how to turn KITTI data into a rosbag with RGB and depth messages?
ETH Zurich seems to have a utility for building rosbags from KITTI sequences here: https://github.com/ethz-asl/kitti_to_rosbag
However, since KITTI doesn't come with depth (apart from the LiDAR), you'd need to compute that separately, and modify the code above to include one additional depth or disparity image in each frame.
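For the depth step, the conversion from a disparity map to metric depth is just depth = focal_length_px × baseline / disparity. A minimal sketch, using approximate KITTI calibration values (focal length ≈ 721.5 px, stereo baseline ≈ 0.54 m; in practice these should be read from the sequence's calib files, not hard-coded):

```python
import numpy as np

def disparity_to_depth(disparity, focal_px=721.5, baseline_m=0.54):
    """Convert a disparity map (pixels) to depth (meters).

    Invalid (non-positive) disparities are mapped to 0 so they can be
    masked out downstream.
    """
    depth = np.zeros_like(disparity, dtype=np.float64)
    valid = disparity > 0
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth
```

The resulting depth (or the raw disparity, depending on what the consumer expects) would then be the extra image added to each rosbag frame.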
If you want, you can use the same DispNet this project uses; or, if you want to upgrade to newer tech (DispNet is almost five years old), I'd recommend my colleague's excellent DeepPruner: https://github.com/uber-research/DeepPruner (MIT-licensed).
Well, can you explain clearly how to use another KITTI dataset? I don't know how to process one.
Or could you provide another KITTI dataset that has already been processed? Thanks.
Do you mean pre-processed for DynSLAM or pre-processed for ROS?
For DynSLAM, there are tools in this repo (MNC and dispnet-flownet-docker in `preprocessing/`) to do it for KITTI data. For ROS I am not sure; I am not familiar with rosbags.
Every time I pre-process KITTI data, it shows:

```
dmf@dmf-GS63-7RE:/media/dmf/丁/DynSLAM$ scripts/preprocess-sequence.sh kitti-tracking data/kitti/tracking training 6
Failure: Invalid directory structure for the tracking dataset.
Expected directory structure:
├── training
│   ├── calib
│   ├── image_02
│   │   ├── 0000
│   │   └── etc.
│   ├── image_03
│   │   ├── 0000
│   │   └── etc.
│   └── velodyne
│       ├── 0000
│       └── etc.
└── testing
    └──
```

Do you know how to solve this?
What does your dir structure look like?
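One way to answer that programmatically: a hypothetical helper (not part of DynSLAM) that reports which of the required sub-directories from the error message are missing under the dataset root:

```python
import os

# Sub-directories the preprocessing script's error message says it expects.
REQUIRED = [
    "training/calib",
    "training/image_02/0000",
    "training/image_03/0000",
    "training/velodyne/0000",
]

def check_tracking_layout(root):
    """Return the list of required sub-directories missing under `root`."""
    return [d for d in REQUIRED if not os.path.isdir(os.path.join(root, d))]
```

Running it on `data/kitti/tracking` and printing the result would show exactly which folder trips the check.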
Seems like the folder is called `training 1` instead of `training`. Maybe that's the issue?
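Assuming that is the cause, renaming the folder fixes it. A self-contained sketch (the `mkdir` only stands in for the mis-named folder; in practice you would just run the `mv` from the dataset root):

```shell
# Stand-in for the mis-named folder, so this snippet runs on its own:
mkdir -p "training 1"
# The preprocessing script expects exactly "training"; the quotes are
# needed because of the space in the folder name:
mv "training 1" training
```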
![image](https://user-images.githubusercontent.com/60035642/80867912-7d7ed280-8cc9-11ea-8d48-ed0e79373367.png)

Is this wrong?