unsupervised-depth-completion-visual-inertial-odometry
How to generate colored depth maps and back-project them to 3-D?
Hi, how did you generate the predicted depth maps for the KITTI dataset as colored images and back-project them to 3-D? The net_utils file seems to do this, but how is it called for KITTI? Thank you very much for any reply.
You can follow the general instructions in this thread from our recent work, KBNet: https://github.com/alexklwong/calibrated-backprojection-network/issues/17#issuecomment-1176431551

This repo is in TensorFlow, so it is a bit harder to use for the general purpose of creating point clouds. A PyTorch version is coming soon.
In the meantime, here is the PyTorch version of the function: https://github.com/alexklwong/calibrated-backprojection-network/blob/master/src/net_utils.py#L1638-L1667