stable-diffusion-webui-forge
[Bug]: Not all Loras or LyCORIS showing up
Checklist
- [X] The issue exists after disabling all extensions
- [X] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [X] The issue exists in the current version of the webui
- [X] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
What happened?
Not all of my LoRAs are showing up in the Lora tab.
Steps to reproduce the problem
Installed as described in the documentation.
What should have happened?
All LoRAs should be listed in the Lora tab.
What browsers do you use to access the UI ?
Google Chrome
Sysinfo
Console logs
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: f0.0.12-latest-153-gb50d978e
Commit hash: b50d978e1b730d6836cf49bd21099ab3712a27fa
Installing xformers
Installing Miaoshou assistant requirement: gpt_index==0.4.24
Installing Miaoshou assistant requirement: langchain==0.0.132
Installing Miaoshou assistant requirement: gradio_client==0.5.0
Installing Miaoshou assistant requirement: requests==2.31.0
Installing Miaoshou assistant requirement: urllib3==2.0.6
Installing Miaoshou assistant requirement: tqdm==4.64.0
Launching Web UI with arguments: --no-download-sd-model --xformers --no-half-vae --api --port 7861
Total VRAM 8192 MB, total RAM 32691 MB
WARNING:xformers:A matching Triton is not available, some optimizations will not be enabled.
Error caught was: No module named 'triton'
xformers version: 0.0.23.post1
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 3070 : native
VAE dtype: torch.bfloat16
D:\webui_forge\system\python\lib\site-packages\pytorch_lightning\utilities\distributed.py:258: LightningDeprecationWarning: `pytorch_lightning.utilities.distributed.rank_zero_only` has been deprecated in v1.8.1 and will be removed in v2.0.0. You can import it from `pytorch_lightning.utilities` instead.
rank_zero_deprecation(
Using xformers cross attention
ControlNet preprocessor location: D:\webui_forge\webui\models\ControlNetPreprocessor
Civitai Helper: Get Custom Model Folder
logs_location: D:\webui_forge\webui\extensions\miaoshouai-assistant\logs
CivitAI Browser+: Aria2 RPC started
sd-webui-prompt-all-in-one background API service started successfully.
Loading weights [1681bf15ca] from D:\webui_forge\webui\models\Stable-diffusion\js2prony_v10.safetensors
2024-02-14 14:38:21,816 - ControlNet - INFO - ControlNet UI callback registered.
Civitai Helper: Set Proxy:
model_type EPS
UNet ADM Dimension 2816
Running on local URL: http://127.0.0.1:7861
To create a public link, set `share=True` in `launch()`.
Startup time: 76.1s (prepare environment: 36.1s, import torch: 10.4s, initialize shared: 0.2s, other imports: 1.2s, load scripts: 8.2s, create ui: 13.7s, gradio launch: 2.9s, add APIs: 2.7s, app_started_callback: 0.5s).
Using xformers attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using xformers attention in VAE
extra {'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids', 'cond_stage_model.clip_l.logit_scale'}
Loading VAE weights specified in settings: D:\webui_forge\webui\models\VAE\xlVAEC_c9.safetensors
To load target model SDXLClipModel
Begin to load 1 model
Moving model(s) has taken 0.60 seconds
Model loaded in 38.1s (load weights from disk: 1.3s, forge instantiate config: 2.7s, forge load real models: 32.1s, load VAE: 0.4s, calculate empty prompt: 1.5s).
Additional information
No response
By default, the webui only displays LoRAs and networks that are compatible with the currently loaded model. Does enabling the "Always show all networks on the Lora page" setting help?
Related issue: https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/13134#issuecomment-1710953757
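To illustrate the behavior described above, here is a minimal sketch of that kind of compatibility filtering. The names (`Network`, `base_model`, `visible_networks`) are illustrative, not the actual webui internals: the idea is that each network record carries the base-model family it was trained for, and entries that don't match the loaded checkpoint are hidden unless a "show all" flag is set.

```python
# Hypothetical sketch of compatibility-based network filtering
# (illustrative names only -- not the real webui implementation).
from dataclasses import dataclass


@dataclass
class Network:
    name: str
    base_model: str  # e.g. "sd1", "sdxl", or "unknown"


def visible_networks(networks, loaded_base_model, show_all=False):
    """Return the networks to display: everything when show_all is set,
    otherwise only entries matching the loaded model (unknowns kept)."""
    if show_all:
        return list(networks)
    return [n for n in networks
            if n.base_model in (loaded_base_model, "unknown")]


networks = [
    Network("style_sd15", "sd1"),
    Network("detail_sdxl", "sdxl"),
    Network("mystery", "unknown"),
]

# With an SDXL checkpoint loaded, the SD 1.x LoRA is hidden...
print([n.name for n in visible_networks(networks, "sdxl")])
# ...but the "show all" option lists every entry regardless of base model.
print([n.name for n in visible_networks(networks, "sdxl", show_all=True)])
```

In this sketch, a LoRA whose detected base model doesn't match the loaded checkpoint (here, an SD 1.x LoRA under an SDXL model) simply never appears in the list, which would look exactly like "missing" LoRAs in the tab.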
Thx.