
sensor.lidar.ray_cast_semantic and sensor.lidar.ray_cast collect different lengths

Open miyangy opened this issue 1 year ago • 4 comments

The settings of lidar.ray_cast_semantic and lidar.ray_cast are the same:

('channels', '32'), ('points_per_second', '300000'), ('range', '100'), ('rotation_frequency', str(20))

lidar.ray_cast_semantic raw_data: 80496 -> (13416, 6)
lidar.ray_cast raw_data: 29584 -> (7396, 4)

Why are the lengths different? Thanks!!!

miyangy avatar Apr 16 '24 13:04 miyangy

Hi @miyangy,

LIDAR and Semantic LIDAR are two different sensors and there are some differences as explained in the unordered list here: https://carla.readthedocs.io/en/latest/ref_sensors/#semantic-lidar-sensor.

If you look at the documentation, the raw_data returned by the Semantic LIDAR contains more values per point.

raw_data documentation of the lidar.ray_cast: "Received list of 4D points. Each point consists of [x,y,z] coordinates plus the intensity computed for that point." https://carla.readthedocs.io/en/latest/python_api/#carla.LidarMeasurement

raw_data documentation of the lidar.ray_cast_semantic: "Received list of raw detection points. Each point consists of [x,y,z] coordinates plus the cosine of the incident angle, the index of the hit actor, and its semantic tag." https://carla.readthedocs.io/en/latest/python_api/#carla.SemanticLidarMeasurement

For this reason the length of the data is different.
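As an illustration of those two layouts, both buffers can be decoded with numpy. This is a sketch based on the documented point formats linked above; the field names in the structured dtype are my own labels, not names from the CARLA API.

```python
import numpy as np

def decode_lidar(raw_data):
    """sensor.lidar.ray_cast: each point is 4 x 32-bit floats (x, y, z, intensity)."""
    return np.frombuffer(raw_data, dtype=np.float32).reshape(-1, 4)

# sensor.lidar.ray_cast_semantic: 4 x 32-bit floats + 2 x 32-bit unsigned ints per point.
# Field names below are illustrative labels matching the documented layout.
SEMANTIC_DTYPE = np.dtype([
    ('x', np.float32), ('y', np.float32), ('z', np.float32),
    ('cos_inc_angle', np.float32),
    ('object_idx', np.uint32), ('object_tag', np.uint32),
])

def decode_semantic_lidar(raw_data):
    return np.frombuffer(raw_data, dtype=SEMANTIC_DTYPE)
```

So a regular LIDAR point occupies 16 bytes and a semantic LIDAR point 24 bytes, which is why equal byte lengths would not even mean equal point counts.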

JoseM98 avatar Apr 19 '24 10:04 JoseM98

Hi @JoseM98 I understand. Thank you!!! I have another question: If I want to add semantic tags to the raw data collected by lidar, do I have to match them one by one with the raw data collected by semantic lidar?

miyangy avatar Apr 22 '24 07:04 miyangy

You need to make sure you have the same frame for both sensors. The difference between 80496 -> (13416, 6) and 29584 -> (7396, 4) looks a bit too large to be explained by the point format alone.

https://carla.readthedocs.io/en/0.9.15/ref_sensors/#lidar-sensor: xyzi = 4 x 32-bit floats

https://carla.readthedocs.io/en/0.9.15/ref_sensors/#semantic-lidar-sensor: xyz + angle + index + tag = 3 x 32-bit floats + one 32-bit float + one 32-bit integer + one 32-bit integer

With synchronous_mode you can step the simulation yourself to get the data from the same time step. A ray that doesn't hit anything will not give you a point back.
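A minimal sketch of the synchronous-mode setup (this assumes a CARLA server running on localhost:2000; the calls are the standard carla Python client API):

```python
import carla

client = carla.Client('localhost', 2000)
world = client.get_world()

settings = world.get_settings()
settings.synchronous_mode = True       # the client drives the simulation
settings.fixed_delta_seconds = 0.05    # 0.05 s matches rotation_frequency = 20
world.apply_settings(settings)

# After spawning both sensors and registering their listen() callbacks:
world.tick()  # both sensors capture data for the same simulation step
```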

For example:
- 100 points hit the floor
- 100 points hit the air (hit nothing)
- You get 100 points back

I noticed that even with the same time step, you could still have a difference of 1-2 points (I believe even more if you go into the millions of points each step).

If you want to combine the data of the 2 sensors, you have to match points by their xyz values. In the normal case (if both clouds have the same number of points) you could simply join the data together, since the points should have the same sorting.

To be safe, I would get the xyz value from sensor-data-1 (lidar), search for the xyz value of sensor-data-2 (semantic lidar), and create a new point in a new array (self created sensor-data-3?)

For example:

xyzi = (1.0, 1.0, 0.0, 1.0)
xyzait (xyz, angle, index, tag) = (1.0, 1.0, 0.0, 0.5, 2, 20)
self_created_point = (xyzi[0], xyzi[1], xyzi[2], xyzi[3], xyzait[3], xyzait[4], xyzait[5])
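That matching step can be sketched in numpy. The helper below is hypothetical (not from the CARLA API) and assumes the two clouds agree on xyz up to rounding; any positional noise on the regular LIDAR would break exact matching.

```python
import numpy as np

def merge_by_xyz(lidar_pts, sem_pts, decimals=3):
    """Attach the semantic angle/index/tag to every lidar point whose
    (rounded) xyz also appears in the semantic cloud.

    lidar_pts: (N, 4) array [x, y, z, intensity]
    sem_pts:   (M, 6) array [x, y, z, cos_angle, index, tag]
    Returns a (K, 7) array [x, y, z, intensity, cos_angle, index, tag].
    """
    lookup = {tuple(np.round(p[:3], decimals)): p for p in sem_pts}
    merged = []
    for p in lidar_pts:
        s = lookup.get(tuple(np.round(p[:3], decimals)))
        if s is not None:  # skip points present in only one cloud
            merged.append([p[0], p[1], p[2], p[3], s[3], s[4], s[5]])
    return np.array(merged)
```

Points dropped by only one of the two sensors (the 1-2 point difference mentioned above) are silently skipped, so the result only contains points seen by both.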

As a note: a colleague of mine (and 1-2 others who tried the same) tried to work with the intensity of the lidar points. The intensity value doesn't look right, or doesn't change. I would test whether it changes before you use the intensity value.

PatrickPromitzer avatar Apr 24 '24 15:04 PatrickPromitzer

Hi @miyangy,

The difference between Lidar and SemanticLidar is that the former simulates external perturbations (point drop-off and noise) for better realism; therefore, the Lidar returns fewer points than the SemanticLidar. These perturbations can be disabled so that, in synchronous mode, the points of the two sensors are the same, but you would lose realism in the Lidar.
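For reference, these are blueprint attributes of sensor.lidar.ray_cast that control those perturbations (a configuration sketch; lidar_bp is assumed to be the blueprint fetched from the blueprint library):

```python
# Assuming: lidar_bp = world.get_blueprint_library().find('sensor.lidar.ray_cast')
lidar_bp.set_attribute('dropoff_general_rate', '0.0')   # no random point dropping
lidar_bp.set_attribute('dropoff_zero_intensity', '0.0') # keep zero-intensity points
lidar_bp.set_attribute('noise_stddev', '0.0')           # no positional noise
```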

I think that the solution proposed by PatrickPromitzer is fine:

To be safe, I would get the xyz value from sensor-data-1 (lidar), search for the xyz value of sensor-data-2 (semantic lidar), and create a new point in a new array (self created sensor-data-3?)

The other option would be to create a new sensor in code that is very similar to the Lidar with the data you need. But this solution would be more complex.

JoseM98 avatar Apr 26 '24 12:04 JoseM98

I am going to close the issue. Feel free to reopen it if you need more help.

JoseM98 avatar May 03 '24 10:05 JoseM98