threestudio

A unified framework for 3D content generation.

260 threestudio issues

```
python launch.py --config configs/instructnerf2nerf.yaml --train --gpu 1 \
    data.dataroot="data/bear" \
    data.camera_layout="front" \
    data.camera_distance=1 \
    system.prompt_processor.prompt="Turn the bear into a grizzly bear" \
    trainer.val_check_interval=100 \
    data.eval_data_interval=15 \
    system.start_editing_step=1000 \
    trainer.max_steps=10000
```

![image](https://github.com/threestudio-project/threestudio/assets/79900945/4438c821-f50b-4fd0-bc2b-f52c27997b68) ![image](https://github.com/threestudio-project/threestudio/assets/79900945/95054124-cda7-46fa-b1bf-2dfa59ddb7d8)...

Hello, I was just trying to run all the steps of the ProlificDreamer commands given in the Colab. The first one worked as intended: ``` # Train !python launch.py --config configs/prolificdreamer.yaml...

Hi, one suggestion: [Wonder3D](https://github.com/xxlong0/Wonder3D) is a very good repo for single-image-to-3D. Please add it to this framework if you can. Thanks

What is the structure of the pretrained checkpoint for zero123-unified-guidance? zero123-unified-guidance uses diffusers' `pretrained_model_name_or_path`, so an example of the corresponding file layout should be provided for "bennyguo/zero123-diffusers".
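For reference, diffusers pipelines are generally saved as a folder containing a `model_index.json` plus one subfolder per component. A minimal sketch of a layout check, assuming a typical component list (the exact subfolders for "bennyguo/zero123-diffusers" may differ — `model_index.json` is the authoritative source):

```python
from pathlib import Path

def check_diffusers_layout(root, components=("unet", "vae", "scheduler")):
    """Report which pieces of a diffusers-style checkpoint folder are present.

    The `components` default here is an assumption; the real list comes
    from the pipeline's model_index.json.
    """
    root = Path(root)
    report = {"model_index.json": (root / "model_index.json").is_file()}
    for name in components:
        report[name] = (root / name).is_dir()
    return report
```

Each subfolder in turn holds its own `config.json` and a weight file; `model_index.json` records which library class loads each component.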

I've been testing various networks on the human 2D-to-3D task, using the Yoga-82 dataset. Out of the box, Zero123/Magic123 don't seem to output good enough results for the use case of 2D dataset augmentation through...

I am running IN2N on a custom capture. I first used Nerfstudio to pre-process the data as illustrated [here](https://docs.nerf.studio/en/latest/quickstart/custom_dataset.html). However, when I used the processed data in the IN2N pipeline: 1) First...

I still hope to check whether anyone has made SDXL guidance work. I implemented one, but it did not work. The major change I made was to the prompt processing part, as...
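For context, SDXL's prompt processing differs from SD 1.x/2.x in that it runs two text encoders (CLIP ViT-L, 768-dim per token, and OpenCLIP ViT-bigG, 1280-dim per token) and concatenates their per-token hidden states along the channel dimension. A minimal shape-level sketch, with the encoder outputs mocked as nested lists rather than real model calls:

```python
def concat_sdxl_prompt_embeds(embeds_l, embeds_g):
    """Concatenate per-token embeddings from SDXL's two text encoders.

    embeds_l: [seq_len][768]  - CLIP ViT-L hidden states
    embeds_g: [seq_len][1280] - OpenCLIP ViT-bigG hidden states
    Returns [seq_len][2048], the channel width the SDXL UNet expects.
    """
    assert len(embeds_l) == len(embeds_g), "token sequences must align"
    return [l + g for l, g in zip(embeds_l, embeds_g)]
```

A real implementation would additionally pass the pooled ViT-bigG embedding and the size/crop micro-conditioning through SDXL's added-conditioning inputs, which is a common place for a port to go wrong.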

I feel that the generation speed is a bit slow. My A6000 machine runs the official default of 10,000 iterations in about 30 minutes. Based on my simple calculations,...
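As a rough sanity check on those numbers (assuming exactly 10,000 iterations in 30 minutes):

```python
def iters_per_second(total_iters: int, minutes: float) -> float:
    """Average training throughput in iterations per second."""
    return total_iters / (minutes * 60.0)

# 10,000 iterations over 30 minutes works out to roughly 5.6 it/s.
rate = iters_per_second(10_000, 30)
```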

I tested some of my own prompts using dreamfusion-if, and after 10,000 iterations the results were much worse than those shown in the official project. I...

When training the Fantasia3D texture phase with multiple GPUs, self.light.diffuse and self.light.specular constructed in the function build_mips() are always on cuda:0, causing: RuntimeError: texture_fwd_mip(): Inputs tex, uv must reside on...
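This looks like the usual multi-GPU pattern where a tensor is created on a hard-coded device instead of the per-process one: under DDP each rank should place its tensors on `cuda:{local_rank}`, not `cuda:0`. A minimal sketch of that mapping, with the `.to(...)` call sites below being assumptions about where the light buffers are built:

```python
def local_device(local_rank: int) -> str:
    """Map a DDP local rank to its CUDA device string.

    Each process should create (or move) its tensors on cuda:{local_rank};
    buffers pinned to cuda:0 break every rank except rank 0.
    """
    return f"cuda:{local_rank}"

# Hypothetical fix inside build_mips(): move the light buffers onto the
# device this process actually renders with, along the lines of
#   self.light.diffuse = self.light.diffuse.to(local_device(rank))
#   self.light.specular = [s.to(local_device(rank)) for s in self.light.specular]
```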