Training on the NeRF representation only returns white rendered images and no depth maps.
I have tried to train on the NeRF representation, but the generator only returns white rendered images.
Can someone please help me? Are the weights of the `lrm_reconstructor` not loaded correctly?
Unfortunately, I don't have the answer to your question. However, since you have reached this step, I would like to ask if you could help me get there as well. How did you create your dataset? I managed to render a dataset with 32 random views, along with normal and depth maps. Now, I am stuck at the point where I have to save the camera positions for each view.
Hi, I would like to ask if you could help me render a dataset with 32 random views, along with normal and depth maps. Thanks a lot!
@SBlumenstock1 If the model cannot render anything during training, there is most likely something wrong with the camera poses in your dataset. Please refer to https://github.com/liuyuan-pal/SyncDreamer/blob/main/blender_script.py#L202, which uses the same camera-pose-saving function as mine. Note that this function produces a world-to-camera matrix, so make sure it is inverted into a camera-to-world matrix in the dataloader.
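Not from the repo, but here is a minimal sketch of that inversion for anyone checking their own dataloader (the file name is hypothetical, and it assumes the pose was saved as a NumPy array):

```python
import numpy as np

# Hypothetical file name: load a saved world-to-camera pose.
w2c = np.load("camera_pose_000.npy")
if w2c.shape == (3, 4):
    # Pad a 3x4 [R|t] matrix to a full 4x4 homogeneous transform.
    w2c = np.vstack([w2c, [0.0, 0.0, 0.0, 1.0]])

# Invert into the camera-to-world (pose) matrix the reconstructor expects.
c2w = np.linalg.inv(w2c)

# Sanity check: the camera position is the translation column of c2w and,
# for an object-centric dataset, should lie on a sphere around the origin.
print("camera position:", c2w[:3, 3], "radius:", np.linalg.norm(c2w[:3, 3]))
```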
I've encountered the same issue. I checked your dataset processing at https://github.com/TencentARC/InstantMesh/blob/main/src/data/objaverse.py#L186, and you already convert the world-to-camera matrix to a camera-to-world matrix, so it shouldn't be that. I also used https://github.com/liuyuan-pal/SyncDreamer/blob/main/blender_script.py to render the dataset. Could you please let me know where the problem might be?
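One further check that may help (a sketch under the assumption that poses are stored as 4x4 matrices; the file name is made up): white renders usually mean the rays miss the object entirely, for example because of an OpenGL-vs-OpenCV camera-axis mismatch. You can test whether each camera actually faces the object at the origin:

```python
import numpy as np

def view_alignment(c2w: np.ndarray, opengl: bool = True) -> float:
    """Cosine between the camera's viewing direction and the direction from
    the camera to the world origin (close to 1.0 = looking straight at it)."""
    pos = c2w[:3, 3]
    # Blender/OpenGL cameras look along -z; OpenCV-style cameras along +z.
    forward = -c2w[:3, 2] if opengl else c2w[:3, 2]
    return float(forward @ (-pos / np.linalg.norm(pos)))

c2w = np.linalg.inv(np.load("camera_pose_000.npy"))  # hypothetical 4x4 pose file
print(view_alignment(c2w, opengl=True))   # ~1.0 if the OpenGL convention matches
print(view_alignment(c2w, opengl=False))  # otherwise the data may be OpenCV-style
```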
@bluestyle97 @SBlumenstock1 Thank you!
> I have tried to train on the NeRF representation, but the generator only returns white rendered images. Can someone please help me? Are the weights of the `lrm_reconstructor` not loaded correctly?
Sorry to bother you. I used the camera pose data from the code along with the corresponding images, setting `fx = fx * img_size`, `fy = fy * img_size`, `cx = 0.5 * img_size`, and `cy = 0.5 * img_size`. Then I trained a NeRF model with these parameters, but the training failed. The six images correspond to six separate 3D objects, and the result looks similar to the image below.
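For reference, here is a sketch of how that intrinsics setup translates into a pixel-space K matrix (all values are hypothetical; `fx_norm`/`fy_norm` stand in for whatever normalized focal lengths your config provides):

```python
import numpy as np

img_size = 256                 # hypothetical rendered image resolution
fx_norm, fy_norm = 1.0, 1.0    # placeholder normalized focal lengths

# Scale normalized intrinsics to pixel units; principal point at the center.
fx = fx_norm * img_size
fy = fy_norm * img_size
cx = 0.5 * img_size
cy = 0.5 * img_size

K = np.array([
    [fx, 0.0, cx],
    [0.0, fy, cy],
    [0.0, 0.0, 1.0],
])
print(K)
```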
Could you provide details on the camera poses you have used?
> I've encountered the same issue. I checked your dataset processing at https://github.com/TencentARC/InstantMesh/blob/main/src/data/objaverse.py#L186, and you already convert the world-to-camera matrix to a camera-to-world matrix, so it shouldn't be that. I also used https://github.com/liuyuan-pal/SyncDreamer/blob/main/blender_script.py to render the dataset. Could you please let me know where the problem might be?
Have you solved the problem? Thank you!
> I have tried to train on the NeRF representation, but the generator only returns white rendered images. Can someone please help me? Are the weights of the `lrm_reconstructor` not loaded correctly?
I also encountered this problem. The images rendered by the model are all white. Have you found a solution?
