
Results 260 comments of kiui

You could simply use the `--save_mesh` option to export the mesh, but it usually looks worse than the NeRF renderings.
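For example, a run that trains and then exports the mesh might look like this (the prompt and workspace name are illustrative, not from the original comment):

```shell
# Train with the default -O preset, then export a mesh at the end.
# Only --save_mesh is the flag under discussion; the rest is an assumed invocation.
python main.py --text "a hamburger" --workspace trial -O --save_mesh
```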

@raspiduino Hi, this is simply because there is no publicly available pretrained checkpoint for Imagen (in fact, Stable Diffusion is the only large pretrained text-to-image model we can access).

CLIP guidance is in fact what the earlier work DreamFields uses, and its quality is indeed worse. You can find some good examples here: https://github.com/shengyu-meng/dreamfields-3D
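As a rough sketch of what DreamFields-style CLIP guidance optimizes: maximize the cosine similarity between CLIP's embedding of the rendered view and of the text prompt. The embedding arrays below are hypothetical stand-ins for the outputs of CLIP's image and text encoders:

```python
import numpy as np

def clip_guidance_loss(image_emb: np.ndarray, text_emb: np.ndarray) -> float:
    """Negative cosine similarity between an image embedding and a text embedding.

    Minimizing this loss maximizes CLIP similarity, which is the
    DreamFields-style guidance objective. The inputs stand in for
    real CLIP encoder outputs (assumed, not from the repo).
    """
    image_emb = image_emb / np.linalg.norm(image_emb)
    text_emb = text_emb / np.linalg.norm(text_emb)
    return float(-np.dot(image_emb, text_emb))

# Identical embeddings give the minimum possible loss:
print(clip_guidance_loss(np.ones(4), np.ones(4)))  # → -1.0
```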

@lcysonya Hi, it's simply not implemented right now. A plane light is simpler, and it should have a similar effect.

@Ainaemaet Hi, I'll try to update it whenever I make progress on improving quality.

@vishalghor Hi, you could try to uncomment [these lines](https://github.com/ashawkey/stable-dreamfusion/blob/main/nerf/utils.py#L115) to make torch more deterministic at the cost of speed, but I still cannot guarantee it will produce exactly the same...
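Beyond those lines, the usual recipe is to seed every RNG source and disable cuDNN's non-deterministic autotuning. A minimal sketch (the torch branch is guarded by a try/except so the snippet also runs where torch is not installed; function name is my own):

```python
import os
import random

import numpy as np

def seed_everything(seed: int) -> None:
    """Seed the common RNG sources for (more) reproducible runs."""
    random.seed(seed)
    np.random.seed(seed)
    os.environ["PYTHONHASHSEED"] = str(seed)
    try:
        import torch
        torch.manual_seed(seed)
        torch.cuda.manual_seed_all(seed)
        # Trade speed for reproducibility in cuDNN kernel selection:
        torch.backends.cudnn.deterministic = True
        torch.backends.cudnn.benchmark = False
    except ImportError:
        pass  # torch unavailable; numpy/random are still seeded

seed_everything(42)
a = np.random.rand(3)
seed_everything(42)
b = np.random.rand(3)
assert np.allclose(a, b)  # same seed, same draws
```

Even with all of this, some CUDA ops (e.g. atomic adds in the hash-grid backward pass) remain non-deterministic, which is why identical results still cannot be guaranteed.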

@yijicheng Hi, you could add extra loss [here](https://github.com/ashawkey/stable-dreamfusion/blob/main/nerf/utils.py#L387).
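Structurally, adding a loss there amounts to summing a weighted extra term onto the existing SDS loss before `backward()`. A minimal sketch (the term names and the helper are hypothetical, not from the repo; in `train_step` each term would be computed from the render outputs):

```python
def total_loss(sds_loss: float, extra_losses: dict, weights: dict) -> float:
    """Combine the base SDS loss with user-added regularizer terms.

    `extra_losses` maps a (hypothetical) term name to its scalar value,
    `weights` maps the same name to its loss weight (default 1.0).
    """
    loss = sds_loss
    for name, value in extra_losses.items():
        loss += weights.get(name, 1.0) * value
    return loss

# e.g. an opacity-sparsity regularizer weighted by 0.1:
loss = total_loss(0.5, {"opacity": 0.2}, {"opacity": 0.1})
assert abs(loss - 0.52) < 1e-9
```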

@gaoalexander Hi, please check the SDS loss section of the original paper. The gradient through the diffusion model's U-Net is omitted.
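For reference, the SDS gradient as defined in the DreamFusion paper; the U-Net Jacobian $\partial \hat{\epsilon}_\phi / \partial x_t$ is the term that gets dropped:

```latex
\nabla_\theta \mathcal{L}_{\text{SDS}}(\phi, x = g(\theta))
  \triangleq \mathbb{E}_{t,\epsilon}\!\left[
    w(t)\,\bigl(\hat{\epsilon}_\phi(x_t; y, t) - \epsilon\bigr)\,
    \frac{\partial x}{\partial \theta}
  \right]
```

Here $g(\theta)$ is the differentiable renderer, $\hat{\epsilon}_\phi$ the diffusion model's noise prediction, $\epsilon$ the sampled noise, and $w(t)$ a timestep-dependent weight.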

@SOTAMak1r Hi, if you updated from a previous commit, you could try to reinstall gridencoder with `pip install ./gridencoder`.

@phymhan Hi, in fact the paper mentions the weights in two places; you may check https://github.com/ashawkey/stable-dreamfusion/pull/9 and https://github.com/ashawkey/stable-dreamfusion/issues/29 for some earlier discussions.