OpenPCDet
Add single frame ROS inference script
Add a single-frame ROS inference script (currently only tested on the PV-RCNN model).
Depends on: rospy, ros_numpy, pyquaternion, etc.
Thank you for the contribution.
I am not sure what the benefit of the ROS inference is.
Also, there are many custom paths from your personal server. And version.py
should not be included in this repo, since this file may change along with the git log of each compilation.
It's a kind of visualization tool for testing these LiDAR-based 3D detection methods. An interesting thing is that the baseline model trained on the nuScenes dataset gives worse real-time inference results than the one trained on KITTI: it is more unstable and produces many false positives. That doesn't make sense when these models have high evaluation scores (NDS) but their real-world performance is not as good.
The version.py problem will be fixed. @sshaoshuai
Hi,
inference.py runs perfectly with PointPillars trained on KITTI, but it fails when using PointPillars_MultiHead (trying to run inference on a 360º point cloud).
[ERROR] [1596543042.259403, 117.434018]: bad callback: <function rslidar_callback at 0x7f24f23dd378>
Traceback (most recent call last):
  File "/opt/ros/kinetic/lib/python2.7/dist-packages/rospy/topics.py", line 750, in _invoke_callback
    cb(msg)
  File "inference.py", line 220, in rslidar_callback
    scores, dt_box_lidar, types = proc_1.run(np_p)
  File "inference.py", line 161, in run
    pred_dicts, _ = self.net.forward(data_dict)
  File "/root/OpenPCDet/pcdet/models/detectors/pointpillar.py", line 11, in forward
    batch_dict = cur_module(batch_dict)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/root/OpenPCDet/pcdet/models/backbones_3d/vfe/pillar_vfe.py", line 120, in forward
    features = pfn(features)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/root/OpenPCDet/pcdet/models/backbones_3d/vfe/pillar_vfe.py", line 37, in forward
    x = self.linear(inputs)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/linear.py", line 92, in forward
    return F.linear(input, self.weight, self.bias)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py", line 1408, in linear
    output = input.matmul(weight.t())
RuntimeError: size mismatch, m1: [247920 x 10], m2: [11 x 64] at /pytorch/aten/src/THC/generic/THCTensorMathBlas.cu:268
Can you please solve this problem or give hints on how to solve it?
Thanks
Please update to the newest pull request, and note that models trained on KITTI and nuScenes have different inputs. KITTI: [x, y, z, intensity]; nuScenes: [x, y, z, intensity, timestamp]. @JavierEgido
Thanks. Updating to the latest commit and adding a fake timestamp column to the point cloud worked.
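The fake-timestamp fix described above can be sketched as follows (a minimal sketch with numpy; the function name and the 100-point demo cloud are illustrative, not from the repo):

```python
import numpy as np

def add_fake_timestamp(points):
    """Pad a KITTI-style (N, 4) cloud [x, y, z, intensity] with a zero
    timestamp column so it matches the nuScenes 5-feature input
    [x, y, z, intensity, timestamp]."""
    ts = np.zeros((points.shape[0], 1), dtype=points.dtype)
    return np.hstack([points, ts])

cloud = np.random.rand(100, 4).astype(np.float32)
padded = add_fake_timestamp(cloud)
print(padded.shape)  # (100, 5)
```

This matches the size-mismatch error above: the multi-head nuScenes checkpoint expects 5 point features (10 + 1 pillar features becomes 11), while a KITTI-style cloud only provides 4.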
@muzi2045 Thanks, nice work!
Thank you for this @muzi2045
Why isn't this merged in?
Hi, I'm a student and I have a rosbag file. I'm looking for a method to publish the detection results based on a LiDAR PointCloud2 topic and visualize them in rviz (I want a video-like visualization). However, I don't really understand how this script works. It seems the model makes predictions based on a given .npy or .bin file, but the script also subscribes to a point cloud topic and publishes the bounding boxes. So does the model detect based on the point cloud the script subscribes to, or on a given .npy or .bin file? Thank you for the answer.
Play your rosbag and change the subscribed topic in the script:
sub_ = rospy.Subscriber(the_topic, PointCloud2, rslidar_callback, queue_size=1, buff_size=2**24)
Note that the output jsk bounding boxes' frame_id should be the same as the input point cloud's frame_id.
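The frame_id rule can be illustrated like this (a runnable sketch that uses plain stand-in classes instead of the real std_msgs/Header and jsk_recognition_msgs/BoundingBoxArray, so it works without ROS installed; all names here are illustrative):

```python
from dataclasses import dataclass, field

@dataclass
class Header:  # stand-in for std_msgs/Header
    frame_id: str = ""

@dataclass
class BoundingBoxArray:  # stand-in for jsk_recognition_msgs/BoundingBoxArray
    header: Header = field(default_factory=Header)
    boxes: list = field(default_factory=list)

def make_box_array(cloud_header):
    """Copy the incoming cloud's frame_id onto the output boxes;
    otherwise rviz cannot place the boxes in the cloud's frame."""
    out = BoundingBoxArray()
    out.header.frame_id = cloud_header.frame_id
    return out

arr = make_box_array(Header(frame_id="velodyne"))
print(arr.header.frame_id)  # velodyne
```

In the real callback, the header would come from the received PointCloud2 message.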
Okay, thank you. So in this case the detection results (bounding boxes) are based on my topic's cloud and not on the given .npy or .bin file?
@tsbhun98 The given .npy or .bin is only used to build the network and for the first forward pass, that's all. Using your own bag is OK, but your point cloud should be in the same format as the model's training data.
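For reference, the warm-up input mentioned above is just a point array read from disk; a KITTI-style .bin can be loaded like this (a sketch; the synthetic demo file is illustrative):

```python
import numpy as np
import tempfile, os

def load_kitti_bin(path):
    """KITTI .bin files are flat float32 arrays, 4 values per point:
    x, y, z, intensity."""
    return np.fromfile(path, dtype=np.float32).reshape(-1, 4)

# quick self-check with a synthetic file
demo = np.random.rand(10, 4).astype(np.float32)
path = os.path.join(tempfile.gettempdir(), "demo_points.bin")
demo.tofile(path)
points = load_kitti_bin(path)
print(points.shape)  # (10, 4)
```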
@muzi2045 I'm working on implementing this branch, but having some difficulty with dependencies (right now the latest version of CUDA breaks numba). Can you share what the specifics of your environment are (CUDA version, Linux distro/version, and anything else that might be helpful)?
CUDA 10.0, numba == 0.46.0, torch == 1.4.0+cu100. @Spartacus782
Hello, I am new to ROS. I want to know how to use this code. Do I need to compile with the CMakeLists.txt and package.xml files, and then use rosrun to run this file after the CMake compilation?
@QYWT Hi, I'm relatively new to ROS too, but I managed to make this code work a couple of weeks ago. You need to create a catkin workspace and a package inside it. In the workspace, use the catkin build (or the older catkin_make) command to compile your code. Then you can run the node with rosrun. If you are not familiar with these, I suggest you visit this site and go through the beginner-level tutorials: http://wiki.ros.org/ROS/Tutorials Otherwise, if you have all the required dependencies installed and have changed the paths in your code, you should be fine.
@tsbalazs According to your suggestion, I have successfully run the code, but I cannot subscribe to the topic /pp_boxes in rviz. Have you encountered this problem?
@QYWT For some reason it is not visualizable if you add it by topic. In the Displays panel, click Add and choose the "By display type" option (the default). Choose BoundingBoxArray and it will be listed in the Displays panel. Double-click it and change the Topic to /pp_boxes.
@tsbalazs Thank you very much, I can see the results successfully
@muzi2045 Excuse me, how can I use the model in ROS with my own VLP-16 LiDAR? I have tried inference.py with the pretrained .pth from PCDet, but the bounding box topic /pp_boxes in rviz gives bad results.
I'm sorry about this. The models in PCDet mostly use 64-beam data, and their performance on 16-beam data will be very poor.
@QYWT Thank you for your reply. Do you know whether the poor result is because the pretrained model was trained on 64-beam data, or because the model is unsuitable for 16-beam data even if I train it on my own dataset?
@QYWT The results are shown in the pictures below; the model and checkpoint used are the same.
[image: my VLP-16 data]
[image: KITTI dataset data]
@muzi2045 Thanks for your work, mate! I would like to know whether the inference script supports PointPillars or not.
Yeah, it supports PointPillars.
@muzi2045 Thank you! This is very needed indeed!
Thank you very much for your help. I successfully ran detection in rviz, but the total callback time was about 0.25 seconds, which makes real-time operation impossible. Can I reduce the computation time by restricting the ROI area of the input to the neural network? (Which Python file should I modify?) Or do you have better suggestions? Thank you very much!
Print the elapsed time when you run the Python script and find which part costs the most.
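One way to cut the callback time, as asked above, is to crop the cloud to the model's region of interest before the forward pass, since fewer points means fewer voxels/pillars to process (a sketch; the example ranges are KITTI-like defaults and should be matched to your config's POINT_CLOUD_RANGE):

```python
import numpy as np

# Region-of-interest limits; match these to your model's config.
X_RANGE, Y_RANGE, Z_RANGE = (0.0, 70.4), (-40.0, 40.0), (-3.0, 1.0)

def crop_roi(points):
    """Drop points outside the region of interest before inference."""
    m = ((points[:, 0] >= X_RANGE[0]) & (points[:, 0] < X_RANGE[1])
         & (points[:, 1] >= Y_RANGE[0]) & (points[:, 1] < Y_RANGE[1])
         & (points[:, 2] >= Z_RANGE[0]) & (points[:, 2] < Z_RANGE[1]))
    return points[m]

cloud = np.array([[10.0, 0.0, -1.0, 0.5],    # inside the ROI
                  [90.0, 0.0, -1.0, 0.5]])   # outside (x too large)
print(crop_roi(cloud).shape)  # (1, 4)
```

As suggested above, wrapping each pipeline stage with time.perf_counter() will show whether preprocessing, the forward pass, or message conversion dominates the 0.25 s.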
Hello, I used a rosbag for visualization and found that the color of the bounding boxes is random. Can I set a fixed color? @muzi2045
Try using boxes with a fixed color via a ROS MarkerArray.
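The fixed-color idea can be sketched like this (a plain-Python sketch; each dict stands in for a visualization_msgs/Marker of type CUBE that would go into a MarkerArray, and all names are illustrative):

```python
FIXED_COLOR = (0.0, 1.0, 0.0, 0.8)  # RGBA: green, slightly transparent

def boxes_to_markers(boxes):
    """Describe one marker per detected box, all sharing the same color,
    unlike the jsk BoundingBoxArray display's varying colors. In real
    code each dict would be filled into a Marker message and the
    MarkerArray published to a topic rviz subscribes to."""
    markers = []
    for i, (x, y, z, dx, dy, dz, yaw) in enumerate(boxes):
        markers.append({
            "id": i,
            "position": (x, y, z),
            "scale": (dx, dy, dz),
            "yaw": yaw,
            "color": FIXED_COLOR,
        })
    return markers

ms = boxes_to_markers([(1, 2, 0, 4, 2, 1.5, 0.0),
                       (5, -3, 0, 4, 2, 1.5, 1.57)])
print(len(ms), ms[0]["color"])
```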