stable-dreamfusion
Generation without texture
Description
https://user-images.githubusercontent.com/79019929/232335401-4b67e31e-7d7b-4423-abba-85c55be55246.mp4
https://user-images.githubusercontent.com/79019929/232335427-41c5d2b2-1bf3-4f70-85c4-a65a9a9b8297.mp4
Steps to Reproduce
Here are some of my generated results. The steps were:
- !python main.py --text "a hamburger" --workspace trial -O --vram_O
- !python main.py --workspace trial -O --test
- !python main.py --workspace trial -O --test --save_mesh
Expected Behavior
There is not much texture or detail in the result. How can I fix this and generate a higher-quality result?
Environment
Ubuntu 22.04, NVIDIA GPU
I would also like to ask: what is the difference between fine-tuning with DMTet and simply training and testing?
It's quite common for the result to look like this, because the SDS training loss is calculated on a low-resolution NeRF output (usually 64x64). By contrast, DMTet is a more memory-efficient way to represent a 3D scene than NeRF, so it allows training at a much higher resolution. A detailed explanation can be found in this paper: Magic3D.
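For reference, this is the score distillation sampling (SDS) gradient from the DreamFusion paper (notation follows that paper, not this repo's code): x = g(theta) is the rendered image, epsilon-hat_phi is the diffusion model's noise prediction, and w(t) is a timestep weighting:

\nabla_\theta \mathcal{L}_{SDS} = \mathbb{E}_{t,\epsilon}\!\left[ w(t)\,\big(\hat{\epsilon}_\phi(x_t; y, t) - \epsilon\big)\,\frac{\partial x}{\partial \theta} \right], \qquad x_t = \alpha_t x + \sigma_t \epsilon

Because this gradient is backpropagated through the rendered image x, the render resolution (64x64 here) directly caps how much detail the loss can supervise.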
So does that mean we need to use DMTet to fine-tune the result after the SDS training? Or are they parallel approaches?
Yep, it's a two-stage procedure (coarse to fine) rather than a parallel one. If you want to train a DMTet from scratch, I recommend checking out this paper: Fantasia3D. However, its code has not been released yet, so the paper can only serve as a theoretical reference.
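To be concrete, here is a minimal sketch of the two-stage pipeline, assuming the flags used elsewhere in this thread (the prompt, workspace names, and iteration count are placeholders):

# Stage 1 (coarse): optimize a low-resolution NeRF with the SDS loss.
python main.py -O --text "a hamburger" --workspace trial_nerf

# Stage 2 (fine): fine-tune with DMTet at a higher resolution,
# initialized from the stage-1 checkpoint.
python main.py -O --text "a hamburger" --workspace trial_dmtet \
    --dmtet --iters 5000 \
    --init_ckpt trial_nerf/checkpoints/df.pth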
It seems DMTet fails when I feed the training result to it. Code:
!python main.py -O --text "a rex rabbit with cute tail" \
--workspace results/rexrabbit_HQ_1_DMTet --dmtet --iters 500 \
--init_ckpt results/rexrabbit_HQ_1/checkpoints/df_ep0100.pth
I don't quite understand why a ball is generated instead of the rabbit.
Good question. I also often fail with DMTet lol; the fine-tuning doesn't seem stable enough, and I'm also wondering why this happens. In my case the result eventually loses its color. I found the coloring part still uses the NeRF result, so maybe moving the color to DMTet by using vertex colors would help. But I seldom lose the shape with DMTet. That's really strange. Maybe we're using it the wrong way lmao
LMAO, what is your command line for calling DMTet? I wonder if my epoch count is too small, but I did follow the guideline. Are you using the latest training epoch as the init ckpt for DMTet?
Maybe we need to see if the author has any ideas @ashawkey
Maybe it's related to randomness or the prompt. You can try different seeds, or the examples under scripts (like this one: https://github.com/ashawkey/stable-dreamfusion/blob/main/scripts/run2.sh) to check whether these prompts can generate good shapes.
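For example, a quick seed sweep could look like this (a sketch, assuming main.py exposes a --seed argument; the workspace names are placeholders):

# Run the same prompt with a few seeds and compare the resulting workspaces;
# shape quality can vary a lot between runs.
for seed in 0 1 2 3; do
    python main.py -O --text "a rex rabbit with cute tail" \
        --workspace trial_rabbit_seed${seed} --seed ${seed}
done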
OK, I will try it. Are you using the initial ckpt instead of the epoch-100 ckpt? I saw that your scripts use the init ckpt.
df.pth is always the last checkpoint we saved, so I guess it's the same.
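Assuming the workspace layout from the commands above, you can check this by listing the checkpoint directory; df.pth should mirror the newest df_epXXXX.pth:

# Newest first; df.pth should match the latest df_epXXXX.pth.
ls -t results/rexrabbit_HQ_1/checkpoints/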