Jiale Xu
Is your environment multi-GPU? Try running `python -c "import torch; print(torch.cuda.device_count())"` and see what it prints.
@xxlong0 @flamehaze1115 Hi, since Wonder3D uses an orthographic camera for rendering, there should be an `ortho_scale` parameter for controlling the scale of the camera view, right? How did you set it?
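For context, here is a minimal sketch of how such a parameter is commonly interpreted; the function name and the convention that `ortho_scale` is the full width of the view volume are my assumptions, not Wonder3D's actual code.

```python
import numpy as np

def ortho_projection(ortho_scale: float, near: float = 0.1, far: float = 100.0) -> np.ndarray:
    """OpenGL-style orthographic projection for a symmetric view volume.

    Assumes `ortho_scale` is the full width/height of the view volume,
    so the frustum spans [-ortho_scale/2, ortho_scale/2] in x and y.
    """
    r = ortho_scale / 2.0  # half-extent in x and y
    return np.array([
        [1.0 / r, 0.0,      0.0,                 0.0],
        [0.0,     1.0 / r,  0.0,                 0.0],
        [0.0,     0.0,     -2.0 / (far - near), -(far + near) / (far - near)],
        [0.0,     0.0,      0.0,                 1.0],
    ])
```

Under this convention, a larger `ortho_scale` covers more of the scene and makes the object appear smaller in the rendered view.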
Training on the NeRF representation returns only white rendered images and no depth maps.
@SBlumenstock1 If the model cannot render anything during training, something is most likely wrong with the camera poses in your dataset. Please refer to [https://github.com/liuyuan-pal/SyncDreamer/blob/main/blender_script.py#L202](https://github.com/liuyuan-pal/SyncDreamer/blob/main/blender_script.py#L202), which uses...
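For illustration, a minimal sketch of the usual spherical look-at convention you can sanity-check your poses against; the function name and axis conventions here are my assumptions, so verify against the repo's `blender_script.py`.

```python
import numpy as np

def spherical_camera_pose(azimuth: float, elevation: float, radius: float) -> np.ndarray:
    """Camera-to-world pose (4x4) for a camera on a sphere looking at the origin.

    Angles are in radians; z is up and the camera looks down its -z axis
    (OpenGL convention). Degenerate at elevation = +/- pi/2.
    """
    cam_pos = radius * np.array([
        np.cos(elevation) * np.cos(azimuth),
        np.cos(elevation) * np.sin(azimuth),
        np.sin(elevation),
    ])
    forward = -cam_pos / np.linalg.norm(cam_pos)          # points toward the origin
    right = np.cross(forward, np.array([0.0, 0.0, 1.0]))  # world z-up
    right /= np.linalg.norm(right)
    up = np.cross(right, forward)
    pose = np.eye(4)
    pose[:3, 0], pose[:3, 1], pose[:3, 2] = right, up, -forward
    pose[:3, 3] = cam_pos
    return pose
```

If your dataset's poses place the object outside every camera's view volume (wrong radius, flipped axes, or world-to-camera vs. camera-to-world mixed up), the renderer produces exactly the blank images described above.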
You need to train instant-nerf first. Mesh-based rendering can only provide gradients in the near-surface region, which makes the network hard to converge from scratch. Our mesh model is fine-tuned from the NeRF...
I'm sorry, I have no idea about this problem.
Maybe you can try some post-processing methods like [this](https://github.com/magic-research/magic-boost).
Same problem here.
I think it's possible. You can first reconstruct the object and then align the scale of the rendered depth map with the real depth (see the sketch below).
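A minimal sketch of that scale alignment, assuming a per-image least-squares scale factor over valid pixels; the function name and masking convention are my own illustration, not code from the repo.

```python
import numpy as np

def align_depth_scale(rendered: np.ndarray, real: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Rescale the rendered depth map so it best matches the real depth.

    Solves min_s || s * rendered - real ||^2 over the valid pixels in `mask`,
    which has the closed-form solution s = <rendered, real> / <rendered, rendered>.
    """
    r = rendered[mask].astype(np.float64)
    t = real[mask].astype(np.float64)
    scale = np.dot(r, t) / np.dot(r, r)
    return scale * rendered
```

If the real depth also has an unknown offset (e.g. from a sensor), you would fit an affine map `s * rendered + b` instead of a pure scale.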
Wow, this looks like fantastic work. Looking forward to the code release!