RGBD_Semantic_Segmentation_PyTorch
Request for the Cityscapes RGB-D dataset
First, thank you for sharing this excellent work with us. For our RGB-D segmentation research we really need the Cityscapes RGB-D dataset, or a method to obtain the depth maps. Could you share that sometime soon?
Thanks for your attention. You can compute the depth map from the official disparity maps (16-bit '.png' files) and the camera parameters ('.json' files). Here is some example code:
import json
import cv2
import numpy as np

disp = cv2.imread(disp_file, cv2.IMREAD_UNCHANGED)  # read the 16-bit disparity png file
disp = disp.astype(np.float32)  # np.float is deprecated; use float32
# convert the png values to real disparities, according to the official documentation;
# see also https://github.com/mcordts/cityscapesScripts/issues/55#issuecomment-411486510
disp[disp > 0] = (disp[disp > 0] - 1) / 256
# read camera parameters
with open(camera_file) as f:
    camera_params = json.load(f)
# depth (in meters) = baseline * fx / disparity
depth = camera_params['extrinsic']['baseline'] * camera_params['intrinsic']['fx'] / disp
depth[np.isinf(depth)] = 0  # pixels with disp == 0 produce inf
depth[np.isnan(depth)] = 0  # use np.isnan; 'depth == np.nan' never matches
The final depth map is in 'meters'. Its median value is around 10 m.
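As a quick sanity check of the scale, here is a hypothetical example; the exact baseline and fx vary per camera '.json' file, and the numbers below are just typical Cityscapes values, not guaranteed for every sequence:
baseline, fx = 0.209313, 2262.52   # illustrative values; read the real ones from the camera json
disp_value = 47.0                  # a plausible mid-range disparity in pixels
print(baseline * fx / disp_value)  # ~10 m, consistent with the median reported above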
After you obtain the depth map, you can use Depth2HHA-python to generate the HHA maps. Note that some hyper-parameters in that repo are tuned for the NYU Depth v2 dataset, so a few things need to change:
- We should clip the depth values, because 'sky' pixels have very large depths:
depth = np.minimum(depth, 100) # maximum value is 100 m
- In the function getHHA(C, D, RD), some hyper-parameters need to be adjusted according to the depth range of the Cityscapes dataset:
I[:,:,2] = 20000 / pc[:, :, 2] * 6 # originally 31000 / pc[:,:,2]
I[:,:,1] = h / 20 # height, tuned for Cityscapes
I[:,:,0] = (angle + 128 - 90) + 10
@charlesCXK For hha = getHHA(camera_matrix, D, RD), what is RD for Cityscapes?
@Serge-weihao In Cityscapes, we only have raw depth maps, so D == RD.
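For reference, a minimal sketch of the whole Cityscapes HHA step might look like the following. It assumes the depth array and camera_params from the code above, that the hyper-parameter edits listed earlier were applied inside getHHA, and that getHHA is imported from Depth2HHA-python (the exact module path may differ):
import cv2
import numpy as np
from getHHA import getHHA  # from Depth2HHA-python; exact module path may differ

# build the 3x3 intrinsic matrix from the same camera json
intr = camera_params['intrinsic']
camera_matrix = np.array([[intr['fx'], 0,          intr['u0']],
                          [0,          intr['fy'], intr['v0']],
                          [0,          0,          1]])

depth = np.minimum(depth, 100)             # clip 'sky' pixels, as discussed above
hha = getHHA(camera_matrix, depth, depth)  # D == RD, since Cityscapes only has raw depth
cv2.imwrite('hha.png', hha)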
Thank you very much for your help; I will continue to follow your future work. Furthermore, we collected some outdoor datasets. How should I adjust the hyper-parameters in Depth2HHA-python to fit our dataset? I would be very grateful if you could solve my problem.
@SunXusheng5200 Hi, if you can understand Chinese, please refer to this issue: https://github.com/charlesCXK/RGBD_Semantic_Segmentation_PyTorch/issues/2
Thank you very much for your answer; it is very helpful for my research. Thanks again!
Hi, your RGB-D paper cannot be opened from the link. Could you check it?
Hi @charlesCXK, the depth images generated from the disparity images contain many unfilled values. Did you use some algorithm to fill in the missing values first, or did you train directly without processing them? If you did fill them in, could you share the filling algorithm? Thanks very much!
@TXH-mercury Hi, we didn't use any algorithm to fill in the missing values.
@charlesCXK Hello, would you share the HHA maps generated from the Cityscapes depth maps?
@xiaojiangjiangjinger Sorry, we don't plan to upload the Cityscapes HHA maps for the time being. You could try converting them yourself following https://github.com/charlesCXK/RGBD_Semantic_Segmentation_PyTorch/issues/1#issuecomment-684875832 😄