
cannot generate anything

Open Gentleman2292 opened this issue 1 year ago • 1 comment

CUDA Using Stream: True
Using pytorch cross attention
Using pytorch attention for VAE
ControlNet preprocessor location: C:\Users\x\Code projects\stable-diffusion-webui-forge-on-amd\models\ControlNetPreprocessor
Loading additional modules ... done.
2024-12-07 10:50:28,472 - ControlNet - INFO - ControlNet UI callback registered.
Model selected: {'checkpoint_info': {'filename': 'C:\Users\x\Code projects\stable-diffusion-webui-forge-on-amd\models\Stable-diffusion\f222.ckpt', 'hash': '44bf0551'}, 'additional_modules': [], 'unet_storage_dtype': None}
Using online LoRAs in FP16: False
Running on local URL: http://127.0.0.1:7860

To create a public link, set share=True in launch().
Startup time: 100.3s (prepare environment: 69.8s, launcher: 0.7s, import torch: 12.9s, initialize shared: 1.0s, other imports: 0.9s, load scripts: 2.1s, initialize google blockly: 9.5s, create ui: 2.0s, gradio launch: 1.3s).
Environment vars changed: {'stream': False, 'inference_memory': 1024.0, 'pin_shared_memory': False}
[GPU Setting] You will use 89.98% GPU memory (9200.00 MB) to load weights, and use 10.02% GPU memory (1024.00 MB) to do matrix computation.
Loading Model: {'checkpoint_info': {'filename': 'C:\Users\x\Code projects\stable-diffusion-webui-forge-on-amd\models\Stable-diffusion\f222.ckpt', 'hash': '44bf0551'}, 'additional_modules': [], 'unet_storage_dtype': None}
[Unload] Trying to free all memory for cuda:0 with 0 models keep loaded ... Done.
StateDict Keys: {'unet': 686, 'vae': 248, 'text_encoder': 197, 'ignore': 0}
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
K-Model Created: {'storage_dtype': torch.float16, 'computation_dtype': torch.float16}
Model loaded in 3.2s (unload existing model: 0.2s, forge model load: 3.0s).
[Unload] Trying to free 1329.14 MB for cuda:0 with 0 models keep loaded ... Done.
[Memory Management] Target: JointTextEncoder, Free GPU: 8056.27 MB, Model Require: 234.72 MB, Previously Loaded: 0.00 MB, Inference Require: 1024.00 MB, Remaining: 6797.56 MB, All loaded to GPU.
Moving model(s) has taken 0.06 seconds

Gentleman2292 avatar Dec 07 '24 07:12 Gentleman2292

Full log please, and what exactly did you try to generate? Which type of model did you try, etc.?

TheFerumn avatar Dec 07 '24 17:12 TheFerumn