Xinliang Zhong
You can only select one point on the box at a time. Either move the vehicle body (with the lidar and camera rigidly mounted on it) or move the box, and at each position you should pick only one point on the line. In the image, that point should correspond to the marker you made yourself; in the lidar scan, it should be the protruding point. For details, see the V1 (non-ROS) version, which has a full description of the calibration target and of how the feature points are detected. If this version really doesn't work for you, you can also refer to the open-source calibration method from Megvii, developed by Yijia He. I may also have uploaded a demo video in the repo showing how to pick points and run the calibration: https://github.com/TurtleZhong/camera_lidar_calibration_v2/blob/master/how_to_use.mp4 Good luck. Also, you need to provide initial values for both R and t, and they should be as accurate as possible, because this method uses numerical differentiation instead of computing the Jacobian analytically.
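To make the last point concrete, here is a minimal sketch of why the initial R|t matters: the extrinsic is refined by minimizing reprojection error of the picked 3D lidar points against their 2D image marks, with the Jacobian approximated by finite differences. This is an illustrative reimplementation, not the repository's actual code; the intrinsics `K` and all function names are assumptions.

```python
# Hedged sketch: estimate the camera-lidar extrinsic (R, t) from 3D-2D
# point pairs. No analytic Jacobian is supplied, so scipy falls back to
# finite differences, which is why a decent initial guess is important.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

# Assumed pinhole intrinsics (placeholder values).
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])

def reproj_residuals(params, pts_lidar, pts_img):
    """params = [rx, ry, rz, tx, ty, tz]: axis-angle rotation + translation."""
    R = Rotation.from_rotvec(params[:3]).as_matrix()
    t = params[3:]
    cam = (R @ pts_lidar.T).T + t            # lidar frame -> camera frame
    uv = (K @ cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]              # perspective division
    return (uv - pts_img).ravel()            # per-point pixel residuals

def calibrate(pts_lidar, pts_img, x0):
    # jac is not provided -> numerical differentiation, as in the comment above.
    res = least_squares(reproj_residuals, x0, args=(pts_lidar, pts_img))
    return res.x
```

With ~30 point pairs (as mentioned in another reply below) and an initial guess close to the truth, this converges quickly; a poor initial guess can land in a wrong local minimum.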
> Hello! I was wondering if your calibration solution can work with any kind of lidar sensor? I'm planning on getting a HLS-LFOM1 sensor from hitachi. But in the end...
Hi, sorry for the late reply. 1. In the experiment, we collected 30 pairs of points. 2. If you can find corresponding points between the lidar and the camera, the problem can be solved. >...
Hi @bernatx can you help me with this problem in Carla multi-gpu feature. [Detail post is here](https://github.com/carla-simulator/carla/issues/6543)
@Dmitry-Filippov-Rival I think you can construct a COLMAP-format model using the known poses by following this tutorial: [Reconstruct sparse/dense model from known camera poses](https://colmap.github.io/faq.html#reconstruct-sparse-dense-model-from-known-camera-poses). Also, you can combine point clouds from (lidar...
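The linked FAQ recipe starts by writing a "known poses" model as COLMAP text files (a `cameras.txt`, an `images.txt` with empty `POINTS2D[]` lines, and an empty `points3D.txt`), which COLMAP's triangulator can then consume. A minimal sketch of that first step, with placeholder values; note COLMAP expects the world-to-camera rotation (as a `QW QX QY QZ` quaternion) and translation:

```python
# Hedged sketch: write a COLMAP text model with known poses.
# The camera tuple and image list below are illustrative placeholders.
import os

def write_known_pose_model(out_dir, cam, images):
    """cam: (w, h, fx, fy, cx, cy) for a single PINHOLE camera.
    images: list of (name, qw, qx, qy, qz, tx, ty, tz), world-to-camera."""
    os.makedirs(out_dir, exist_ok=True)
    w, h, fx, fy, cx, cy = cam
    with open(os.path.join(out_dir, "cameras.txt"), "w") as f:
        # CAMERA_ID MODEL WIDTH HEIGHT PARAMS[]
        f.write(f"1 PINHOLE {w} {h} {fx} {fy} {cx} {cy}\n")
    with open(os.path.join(out_dir, "images.txt"), "w") as f:
        for i, (name, qw, qx, qy, qz, tx, ty, tz) in enumerate(images, 1):
            # IMAGE_ID QW QX QY QZ TX TY TZ CAMERA_ID NAME
            f.write(f"{i} {qw} {qx} {qy} {qz} {tx} {ty} {tz} 1 {name}\n")
            f.write("\n")  # empty POINTS2D[] line, as the FAQ requires
    # Empty points3D.txt: points are filled in later by triangulation.
    open(os.path.join(out_dir, "points3D.txt"), "w").close()
```

After this, the FAQ runs feature extraction/matching and `colmap point_triangulator` against this model directory.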
@jaco001 Aside from the greenhouse, I found that the rendered results of the vehicles in the two views are really different. In the right view, maybe more of the vehicle is visible, so...
any updates? Really nice work! > We will release the code upon completion of the company's review process.
@TBetterman Sorry for the late reply. We get 3 elements when running inference; the code is here: (https://github.com/TurtleZhong/hfnet_ros/blob/master/include/hfnet_ros/tensorflow_net.h#L79). If you use the hfnet_ros local descriptors (usually the same as SIFT etc.) you can get matching...
@GeLink9999 You can check this one: https://github.com/TurtleZhong/AVP-SLAM-SIM. We only provide the simulation environment.
really nice work, any updates?