stable-diffusion-webui
[Bug]: Extra networks LoRAs can be scanned twice from `Lora` module import
Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
What happened?
Because there is a call to `list_available_loras()` at the top level of the built-in `Lora` module, any extension that imports that module causes the LoRA directory to be scanned a second time.
Steps to reproduce the problem
- Install https://github.com/vladmandic/sd-extension-system-info
- Notice that LoRAs are scanned twice on startup
What should have happened?
The call to scan LoRAs should go in an event handler instead, so the work is not duplicated.
Commit where the problem happens
a9fed7c3
What platforms do you use to access the UI ?
Windows
What browsers do you use to access the UI ?
Google Chrome
Command Line Arguments
--opt-sdp-attention --disable-nan-check --listen --enable-insecure-extension-access --gradio-inpaint-tool color-sketch --allow-code --no-half-vae --lora-dir G:/model/lora
List of extensions
https://github.com/vladmandic/sd-extension-system-info
Console logs
Launching Web UI with arguments: --opt-sdp-attention --disable-nan-check --listen --enable-insecure-extension-access --gradio-inpaint-tool color-sketch --allow-code --no-half-vae --lora-dir G:/model/lora
Gradio version: 3.22.1
No module 'xformers'. Proceeding without it.
Error setting up CodeFormer:
Traceback (most recent call last):
File "E:\sd\modules\codeformer_model.py", line 38, in setup_model
from facelib.utils.face_restoration_helper import FaceRestoreHelper
File "E:\sd\repositories\CodeFormer\facelib\utils\__init__.py", line 2, in <module>
from .misc import img2tensor, load_file_from_url, download_pretrained_models, scandir
File "E:\sd\repositories\CodeFormer\facelib\utils\misc.py", line 8, in <module>
import gdown
ModuleNotFoundError: No module named 'gdown'
Scanning LoRAs...
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2891/2891 [00:09<00:00, 299.59it/s]
[AddNet] Updating model hashes...
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3240/3240 [00:00<00:00, 13627.83it/s]
[AddNet] Updating model hashes...
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3240/3240 [00:00<00:00, 14399.76it/s]
Loading weights [3fd0aca149] from E:\sd\models\Stable-diffusion\Kukicha.safetensors
Creating model from config: E:\sd\configs\v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Loading VAE weights specified in settings: E:\sd\models\Stable-diffusion\animefull-final-pruned.vae.pt
Applying cross attention optimization (Doggettx).
Model loaded in 4.0s (create model: 0.5s, apply weights to model: 0.7s, apply half(): 0.4s, load VAE: 0.4s, move model to device: 1.0s, load textual inversion embeddings: 1.0s).
Building extra networks UI for txt2img tab...
Scanning LoRAs...
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2891/2891 [00:09<00:00, 313.85it/s]
Additional information
No response