LiDAR-Diffusion
[CVPR 2024] Official implementation of "Towards Realistic Scene Generation with LiDAR Diffusion Models"
Can you share a document on how to convert the diffusion model into ONNX format?
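For reference, a minimal sketch of what such an export might look like, assuming the denoising UNet can be pulled out of the loaded LiDM checkpoint as a plain `torch.nn.Module` and traced with fixed-shape inputs. The function name, latent shape, and forward signature below are assumptions for illustration, not the repository's documented API:

```python
import torch

def export_unet_to_onnx(unet: torch.nn.Module, onnx_path: str = "lidm_unet.onnx"):
    # Hypothetical: `unet` is the denoising backbone extracted from the LiDM.
    unet.eval()
    # Dummy inputs: a latent batch and a timestep tensor; adjust the shapes to
    # whatever the model's forward() actually expects for your config.
    latent = torch.randn(1, 8, 32, 128)              # (B, C, H, W) in latent space
    timestep = torch.tensor([500], dtype=torch.long)
    torch.onnx.export(
        unet,
        (latent, timestep),
        onnx_path,
        input_names=["latent", "timestep"],
        output_names=["noise_pred"],
        dynamic_axes={"latent": {0: "batch"}, "noise_pred": {0: "batch"}},
        opset_version=17,
    )
```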
Hi authors, thanks a lot for your excellent work! Can you also release the code and pretrained weights for the nuScenes dataset? It would be really helpful!
Can you please share the config file for layout (bounding box) to LiDAR?
Thank you for your solid work! During training of the Camera-to-LiDAR task, is the pretrained CLIP (https://github.com/openai/CLIP) frozen or not? Thank you!
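For context, a common way to keep a pretrained CLIP image encoder frozen during conditional training looks like the sketch below. This is the generic pattern, not a statement about what the authors actually did; `clip_model` is a hypothetical handle to the loaded CLIP encoder:

```python
import torch

def freeze_clip(clip_model: torch.nn.Module) -> torch.nn.Module:
    # Disable gradients so the optimizer never updates CLIP's weights,
    # and switch to eval mode so normalization/dropout layers stay deterministic.
    for p in clip_model.parameters():
        p.requires_grad = False
    clip_model.eval()
    return clip_model
```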
Hello authors, thanks a lot for this excellent work! I ran into a problem while executing the Text-to-LiDAR task with the command you provided: `CUDA_VISIBLE_DEVICES=0 python scripts/text2lidar.py -r models/lidm/kitti/cam2lidar/model.ckpt -d...
The [-1, 1] -> [0, 1] transformation is applied twice in a row when converting a range image to a point cloud in the GeoConverter, which serves the geometric loss. Excerpt...
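To illustrate the concern with hypothetical names (the actual GeoConverter code is in the excerpt referenced above), a duplicated rescaling behaves roughly like this:

```python
# Hypothetical illustration of the reported issue: if the range image has
# already been mapped from [-1, 1] to [0, 1] once, applying the same mapping
# again squeezes values into [0.5, 1] and distorts the recovered depth.
def to_unit_range(x):
    return (x + 1.0) * 0.5

x = -0.2                       # a value in [-1, 1]
once = to_unit_range(x)        # 0.4 -> correct [0, 1] value
twice = to_unit_range(once)    # 0.7 -> wrong: rescaled a second time
```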
I encountered an issue while trying to use geo_rec_loss in my training process. I modified the LiDAR-Diffusion/configs/autoencoder/kitti/autoencoder_c2_p4.yaml file, changing geo_factor from 0 to 1. After encountering input shape...
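For context, a weight like geo_factor is usually applied as a multiplier on the geometric reconstruction term, which is why problems in that branch only surface once the factor becomes non-zero. A hedged sketch of that assumed structure (the names below are placeholders, not the repository's actual loss code):

```python
# Assumed structure: total loss = pixel reconstruction + geo_factor * geometric loss.
# With geo_factor == 0 the geometric branch contributes nothing, so shape issues
# inside it can stay hidden until the factor is raised to 1.
def total_loss(rec_loss, geo_rec_loss, geo_factor=1.0):
    return rec_loss + geo_factor * geo_rec_loss
```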
Thank you for your great work. How do you train cam2Lidar? I trained the autoencoder and the LiDM, but why don't the weight files match? Does the Cam2Lidar weight model need...
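One way to see why the weight files do not match is to compare the checkpoint's state-dict keys against the keys the model expects before loading. A small diagnostic sketch (the checkpoint path and `model` variable are placeholders):

```python
import torch

def diff_state_dict_keys(model: torch.nn.Module, ckpt_path: str):
    ckpt = torch.load(ckpt_path, map_location="cpu")
    # Lightning-style checkpoints usually nest the weights under "state_dict".
    state = ckpt.get("state_dict", ckpt)
    ckpt_keys = set(state.keys())
    model_keys = set(model.state_dict().keys())
    print("missing in checkpoint:", sorted(model_keys - ckpt_keys)[:10])
    print("unexpected in checkpoint:", sorted(ckpt_keys - model_keys)[:10])
```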
Thank you very much for your great work. When reproducing the model, we encountered data issues. Did you use the contents of data_2d_raw and data_3d_raw when using KITTI data for...
```python
def load_pretrained_weights(self, path):
    # Load the backbone weights onto CPU regardless of where they were saved.
    w_dict = torch.load(path + "/backbone", map_location=lambda storage, loc: storage)
    self.backbone.load_state_dict(w_dict, strict=True)
    # Load the segmentation decoder weights the same way.
    w_dict = torch.load(path + "/segmentation_decoder", map_location=lambda storage, loc: storage)
    self.decoder.load_state_dict(w_dict, strict=True)
```
thanks for the...