Kuang Haofei

Results: 8 comments by Kuang Haofei

Sorry! I am trying to recall the details of the UCF101 pretraining. Because the UCF101 pretraining experiment is used for a fair comparison with other methods on video retrieval,...

OK, it should be the same as ours. Did you prepare the UCF101 dataset as described in [PREPARE_DATA.md](https://github.com/amazon-research/video-contrastive-learning/blob/main/PREPARE_DATA.md)? If you follow the tutorials to get the dataset and use...

Hi, thanks for your question! Regarding the map visualization, I use a very simple approach: I just extract each occupied point and shift it to a global...
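A minimal sketch of that idea, under the assumption that "shifting to a global frame" means transforming each occupied scan endpoint by the robot's ground-truth pose (x, y, yaw); the function name and interface here are illustrative, not the repository's actual code:

```python
import math

def scan_to_global(ranges, angles, pose):
    """Transform 2D scan endpoints from the robot frame into the global frame.

    ranges/angles describe the occupied endpoints of one scan;
    pose is the ground-truth (x, y, yaw) of the robot for that scan.
    Returns a list of global (x, y) points.
    """
    px, py, yaw = pose
    cos_t, sin_t = math.cos(yaw), math.sin(yaw)
    points = []
    for r, a in zip(ranges, angles):
        # endpoint in the robot (local) frame
        lx, ly = r * math.cos(a), r * math.sin(a)
        # rotate by yaw, then translate by the robot position
        points.append((px + cos_t * lx - sin_t * ly,
                       py + sin_t * lx + cos_t * ly))
    return points
```

Accumulating the transformed points over all scans in the trajectory then yields a global point map that can be plotted directly.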

Hi, thanks for your question! Of course, you could also use gmapping or cartographer to build the map. But since our dataset already has ground-truth poses, we directly run the...

Hello, thanks for your question! The raw-log file contains all unprocessed sensor data, including odometry readings that may be noisy. On the other hand, the corrected-log file is generated by...

Hi, thanks for your question! I provide a toolbox for data preprocessing, including data format conversions. You can check the [INSTRUCTIONS.md](https://github.com/PRBonn/ir-mcl/blob/dev/tools/INSTRUCTIONS.md) in the `dev` branch now. Because the toolbox is...

Thanks for your question! To convert a rosbag to a JSON file, we assume the 2D LiDAR data ([sensor_msgs/LaserScan.msg](http://docs.ros.org/en/melodic/api/sensor_msgs/html/msg/LaserScan.html)) and pose data ([nav_msgs/Odometry.msg](http://docs.ros.org/en/noetic/api/nav_msgs/html/msg/Odometry.html)) have been included in your bag file. Regarding the...
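As a rough sketch of the conversion step, one record per (scan, pose) pair pulled from the bag could be packed into a JSON-serializable dict like the one below; the field names and structure here are assumptions for illustration, not the toolbox's actual schema:

```python
import json
import math

def frame_to_json(ranges, angle_min, angle_increment, pose_xyyaw):
    """Pack one 2D LiDAR scan and its odometry pose into a JSON-serializable dict.

    ranges/angle_min/angle_increment mirror the LaserScan fields;
    pose_xyyaw is the planar pose (x, y, yaw) extracted from Odometry.
    Field names are illustrative only.
    """
    x, y, yaw = pose_xyyaw
    return {
        "scan": {
            "ranges": list(ranges),
            "angle_min": angle_min,
            "angle_increment": angle_increment,
        },
        "odom": {"x": x, "y": y, "yaw": yaw},
    }

# Collect one record per synchronized (scan, pose) pair, then dump them together:
frames = [frame_to_json([1.2, 1.3], -math.pi / 2, math.pi / 180, (0.0, 0.0, 0.0))]
out = json.dumps({"frames": frames})
```

Iterating the bag itself (e.g. with `rosbag.Bag.read_messages`) and pairing scan and odometry messages by timestamp would feed this function; that part is omitted here since it depends on your topic names.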

Hi, I provide a toolbox for data preprocessing, including data format conversions. You can check the [INSTRUCTIONS.md](https://github.com/PRBonn/ir-mcl/blob/dev/tools/INSTRUCTIONS.md) in the `dev` branch now. Because the toolbox is not finished yet, we...