
Are "BETA_PLUS_PLUS LiDARS" and "BETA_PLUS_PLUS Cameras" sensors used in the dataset?

Ckerrr opened this issue 5 years ago • 3 comments

The dataset page (https://level5.lyft.com/dataset/) mentions that two kinds of LiDAR and camera sensors were used to collect the dataset.

However, there seems to be no way to tell the two apart in the available dataset files.

  1. Are they actually included in the dataset?
  2. If "yes" for Q1, how to identify the sensor source of a specific frame?

Thanks!

Ckerrr avatar Dec 05 '19 01:12 Ckerrr

BETA_V0 LiDARS:

  - One 40-beam roof LiDAR and two 40-beam bumper LiDARs.
  - Each LiDAR has an azimuth resolution of 0.2 degrees.
  - All three LiDARs jointly produce ~216,000 points at 10 Hz.
  - The firing directions of all LiDARs are synchronized to be the same at any given time.

BETA_V0 Cameras:

  - Six wide-field-of-view (WFOV) cameras uniformly cover a 360-degree field of view (FOV). Each camera has a resolution of 1224x1024 and a FOV of 70°x60°.
  - One long-focal-length camera is mounted pointing slightly upward, primarily for detecting traffic lights. The camera has a resolution of 2048x864 and a FOV of 35°x15°.
  - Every camera is synchronized with the LiDAR such that the LiDAR beam is at the center of the camera's field of view when the camera is capturing an image.

BETA_PLUS_PLUS LiDARS:

  - The only difference in LiDARs between Beta-V0 and Beta++ is the roof LiDAR, which is 64-beam for Beta++.
  - The synchronization of the LiDARs is the same as in Beta-V0.

BETA_PLUS_PLUS Cameras:

  - Six wide-field-of-view (WFOV) high-dynamic-range cameras uniformly cover a 360-degree field of view (FOV). Each camera has a resolution of 1920x1080 and a FOV of 82°x52°.
  - One long-focal-length camera is mounted pointing slightly upward, primarily for detecting traffic lights. The camera has a resolution of 1920x1080 and a FOV of 27°x17°.
  - Every camera is synchronized with the LiDAR such that the LiDAR beam is at the center of the camera's field of view when the camera is capturing an image.
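Since the dataset metadata does not label which configuration produced a given frame, the specs above suggest two heuristics: the WFOV camera resolution differs (1224x1024 for Beta-V0 vs. 1920x1080 for Beta++), and the roof LiDAR beam count differs (40 vs. 64). A minimal sketch of both checks, assuming the nuScenes-style point cloud layout where each point carries a ring (beam) index in the fifth column; the helper names are hypothetical, and the demo uses synthetic data rather than real dataset files:

```python
import numpy as np

def count_lidar_beams(points: np.ndarray) -> int:
    """Count distinct ring (beam) indices in an (N, 5) point array.

    Assumes the nuScenes-style layout (x, y, z, intensity, ring_index);
    verify against your copy of the dataset before relying on this.
    """
    return int(np.unique(points[:, 4].astype(int)).size)

def guess_config_from_beams(n_beams: int) -> str:
    """Beta-V0's roof LiDAR has 40 beams; Beta++'s has 64."""
    return "BETA_PLUS_PLUS" if n_beams > 40 else "BETA_V0"

def guess_config_from_image_size(width: int, height: int) -> str:
    """Beta-V0 WFOV cameras are 1224x1024; Beta++ cameras are 1920x1080."""
    return "BETA_PLUS_PLUS" if (width, height) == (1920, 1080) else "BETA_V0"

# Synthetic demo: fake a roof sweep spread across 64 beams.
rng = np.random.default_rng(0)
pts = rng.normal(size=(10000, 5))
pts[:, 4] = rng.integers(0, 64, size=10000)  # ring index column
print(guess_config_from_beams(count_lidar_beams(pts)))
print(guess_config_from_image_size(1224, 1024))
```

With real data, the point array would come from the LiDAR `.bin` file for the frame's `sample_data` record, and the image size from the corresponding camera file; only the roof LiDAR's sweep is diagnostic, since the bumper LiDARs are 40-beam in both configurations.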

gledsonmelotti avatar Jul 02 '20 12:07 gledsonmelotti

I assume that both sensor configurations (Beta++ and Beta-V0) were used in the dataset. I'm interested in the roof LiDAR, which was changed from 40 beams to 64 beams. I have two questions:

  1. What is the data split between Beta++ and Beta-V0?
  2. For the 64-beam roof LiDAR, which LiDAR model is used (Pandar64, Velodyne, etc.)?

patrickeala avatar Mar 21 '24 10:03 patrickeala

> I assume that both sensor configurations (Beta++ and Beta-V0) were used in the dataset. I'm interested in the roof LiDAR, which was changed from 40 beams to 64 beams. I have two questions:
>
>   1. What is the data split between Beta++ and Beta-V0?
>   2. For the 64-beam roof LiDAR, which LiDAR model is used (Pandar64, Velodyne, etc.)?

I'm sorry. I don't know.

gledsonmelotti avatar Mar 23 '24 00:03 gledsonmelotti