
Inference always crashes outright at stage 5, asking for help

Open • hpx502766238 opened this issue 6 months ago • 4 comments

The console log is as follows:

(venv) PS E:\WSL\2M\Unique3D> python app/gradio_local.py --port 7860
Warning! extra parameter in cli is not verified, may cause erros.
Loading pipeline components...: 100%|██████████| 5/5 [00:00<00:00, 29.76it/s]
You have disabled the safety checker for <class 'custum_3d_diffusion.custum_pipeline.unifield_pipeline_img2mvimg.StableDiffusionImage2MVCustomPipeline'> by passing safety_checker=None. Ensure that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered results in services or applications open to the public. Both the diffusers team and Hugging Face strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling it only for use-cases that involve analyzing network behavior or auditing its results. For more information, please have a look at https://github.com/huggingface/diffusers/pull/254 .
Warning! extra parameter in cli is not verified, may cause erros.
E:\WSL\2M\Unique3D\venv\lib\site-packages\huggingface_hub\file_download.py:1150: FutureWarning: resume_download is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use force_download=True.
  warnings.warn(
Loading pipeline components...: 100%|██████████| 5/5 [00:00<00:00, 2175.92it/s]
You have disabled the safety checker for <class 'custum_3d_diffusion.custum_pipeline.unifield_pipeline_img2img.StableDiffusionImageCustomPipeline'> by passing safety_checker=None. Ensure that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered results in services or applications open to the public. Both the diffusers team and Hugging Face strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling it only for use-cases that involve analyzing network behavior or auditing its results. For more information, please have a look at https://github.com/huggingface/diffusers/pull/254 .
E:\WSL\2M\Unique3D\venv\lib\site-packages\torch\utils\cpp_extension.py:1967: UserWarning: TORCH_CUDA_ARCH_LIST is not set, all archs for visible cards are included for compilation. If this is not desired, please set os.environ['TORCH_CUDA_ARCH_LIST'].
  warnings.warn(
Loading pipeline components...: 100%|██████████| 6/6 [00:04<00:00, 1.33it/s]
Pipelines loaded with dtype=torch.float16 cannot run with cpu device. It is not recommended to move them to cpu as running them will fail. Please make sure to use an accelerator to run the pipeline in inference, due to the lack of support for float16 operations on this device in PyTorch. Please, remove the torch_dtype=torch.float16 argument, or use another device for inference.
Pipelines loaded with dtype=torch.float16 cannot run with cpu device. It is not recommended to move them to cpu as running them will fail. Please make sure to use an accelerator to run the pipeline in inference, due to the lack of support for float16 operations on this device in PyTorch. Please, remove the torch_dtype=torch.float16 argument, or use another device for inference.
Pipelines loaded with dtype=torch.float16 cannot run with cpu device. It is not recommended to move them to cpu as running them will fail. Please make sure to use an accelerator to run the pipeline in inference, due to the lack of support for float16 operations on this device in PyTorch. Please, remove the torch_dtype=torch.float16 argument, or use another device for inference.
Loading pipeline components...: 100%|██████████| 6/6 [00:00<00:00, 5932.54it/s]
Running on local URL: http://127.0.0.1:7860

To create a public link, set share=True in launch().
E:\WSL\2M\Unique3D\venv\lib\site-packages\transformers\models\clip\modeling_clip.py:480: UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at ..\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:455.)
  attn_output = torch.nn.functional.scaled_dot_product_attention(
  0%|          | 0/30 [00:00<?, ?it/s]Warning! condition_latents is not None, but self_attn_ref is not enabled! This warning will only be raised once.
100%|██████████| 30/30 [00:04<00:00, 6.82it/s]
100%|██████████| 10/10 [00:10<00:00, 1.03s/it]
100%|██████████| 30/30 [00:30<00:00, 1.02s/it]
  0%|          | 0/200 [00:00<?, ?it/s]E:\WSL\2M\Unique3D\venv\lib\site-packages\torch\utils\cpp_extension.py:1967: UserWarning: TORCH_CUDA_ARCH_LIST is not set, all archs for visible cards are included for compilation. If this is not desired, please set os.environ['TORCH_CUDA_ARCH_LIST'].
  warnings.warn(
E:\WSL\2M\Unique3D.\mesh_reconstruction\remesh.py:354: UserWarning: Using torch.cross without specifying the dim arg is deprecated. Please either pass the dim explicitly or simply use torch.linalg.cross. The default value of dim will change to agree with that of linalg.cross in a future release. (Triggered internally at ..\aten\src\ATen\native\Cross.cpp:66.)
  n = torch.cross(e1,cl) + torch.cross(cr,e1) #sum of old normal vectors
100%|██████████| 200/200 [00:07<00:00, 25.91it/s]
  0%|          | 0/100 [00:00<?, ?it/s]
(venv) PS E:\WSL\2M\Unique3D>

The process exits silently right at the start of the fifth stage (the 0/100 progress bar), with no traceback printed.
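Two of the warnings in the log suggest their own remedies, although it is not clear whether either is related to the stage-5 crash. First, torch.utils.cpp_extension warns twice that TORCH_CUDA_ARCH_LIST is not set, so any CUDA extension that gets JIT-compiled during reconstruction is built for every visible architecture. The following is a minimal sketch, not Unique3D's own code, of pinning that variable to the local GPU before anything triggers compilation, for example at the very top of app/gradio_local.py:

    import os
    import torch

    # Restrict JIT extension builds to the installed GPU's compute capability,
    # as suggested by the TORCH_CUDA_ARCH_LIST warning in the log.
    if torch.cuda.is_available():
        major, minor = torch.cuda.get_device_capability(0)
        os.environ.setdefault("TORCH_CUDA_ARCH_LIST", f"{major}.{minor}")

At minimum this narrows the build and removes the warning; it is only a guess that the extension build is involved in the crash.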

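Second, the deprecation warning from mesh_reconstruction\remesh.py line 354 points at torch.cross being called without a dim argument and suggests passing dim explicitly or switching to torch.linalg.cross. A sketch of that change, assuming e1, cl and cr are (N, 3) tensors with xyz on the last axis as the quoted line implies (this only silences the warning and is not necessarily the cause of the crash):

    import torch

    # Stand-in tensors with the (N, 3) layout implied by the quoted line;
    # in remesh.py they come from the mesh's edge and neighbour vertices.
    e1 = torch.randn(8, 3)
    cl = torch.randn(8, 3)
    cr = torch.randn(8, 3)

    # Original line 354:
    #   n = torch.cross(e1, cl) + torch.cross(cr, e1)  # sum of old normal vectors
    # Same computation with the dim made explicit, as the warning suggests:
    n = torch.linalg.cross(e1, cl, dim=-1) + torch.linalg.cross(cr, e1, dim=-1)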
hpx502766238 • Jul 29 '24 16:07