Loras tab empty (also --lora-dir not working)
I cannot use LoRAs on Ubuntu 22.04.4 in the latest version. The Lora tab shows neither the files in the directory "models/Lora" nor those in the directory provided via the startup param "--lora-dir".
commit 8a04293430af3b80760aa0065219256ce0bccc34 Ubuntu 22.04.4 LTS x86_64 RTX 4060 Ti
Startup log:
(forge_venv) user@user-linux:~/development/stable-diffusion-webui-forge$ ./webui.sh
################################################################
Install script for stable-diffusion + Web UI
Tested on Debian 11 (Bullseye), Fedora 34+ and openSUSE Leap 15.4 or newer.
################################################################
################################################################
Running on user user
################################################################
################################################################
Repo already cloned, using it as install directory
################################################################
################################################################
python venv already activate or run without venv: /home/user/development/stable-diffusion-webui-forge/forge_venv
################################################################
################################################################
Launching launch.py...
################################################################
glibc version is 2.35
Check TCMalloc: libtcmalloc_minimal.so.4
libtcmalloc_minimal.so.4 is linked with libc.so,execute LD_PRELOAD=/lib/x86_64-linux-gnu/libtcmalloc_minimal.so.4
Python 3.10.12 (main, Jul 29 2024, 16:56:48) [GCC 11.4.0]
Version: f2.0.1v1.10.1-previous-313-g8a042934
Commit hash: 8a04293430af3b80760aa0065219256ce0bccc34
Legacy Preprocessor init warning: Unable to install insightface automatically. Please try run `pip install insightface` manually.
Launching Web UI with arguments: --listen --ckpt-dir '~/development/ComfyUI/models/checkpoints' --lora-dir '~/development/ComfyUI/models/loras'
Total VRAM 15952 MB, total RAM 64221 MB
pytorch version: 2.3.1+cu121
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 4060 Ti : native
Hint: your device supports --cuda-malloc for potential speed improvements.
VAE dtype preferences: [torch.bfloat16, torch.float32] -> torch.bfloat16
CUDA Using Stream: False
Using pytorch cross attention
Using pytorch attention for VAE
ControlNet preprocessor location: /home/user/development/stable-diffusion-webui-forge/models/ControlNetPreprocessor
2024-08-17 11:09:27,912 - ControlNet - INFO - ControlNet UI callback registered.
Model selected: {'checkpoint_info': {'filename': '/home/user/development/stable-diffusion-webui-forge/models/Stable-diffusion/checkpoints/flux/flux1-dev-fp8.safetensors', 'hash': 'be9881f4'}, 'additional_modules': [], 'unet_storage_dtype': None}
Running on local URL: http://0.0.0.0:7860
To create a public link, set `share=True` in `launch()`.
Startup time: 12.2s (prepare environment: 2.5s, launcher: 2.0s, import torch: 2.5s, initialize shared: 0.5s, other imports: 1.0s, load scripts: 1.2s, create ui: 1.6s, gradio launch: 1.0s).
Environment vars changed: {'stream': False, 'inference_memory': 1024.0, 'pin_shared_memory': False}
I don't have the Lora tab anymore. Windows.
When I use the --lora-dir option, the LoRA tab only shows the LoRA models in the folder specified by --lora-dir. When I remove the --lora-dir option, it reads the LoRA models from the models\Lora folder instead. I've tried enabling "Always show all networks on the Lora page," but the issue persists. However, there's no such problem with the --ckpt-dir and --vae-dir options: models from both the default and the specified folders show up.
startup log:
venv "C:\Users\alpha\Python\stable-diffusion-webui-forge2\venv\Scripts\Python.exe"
Python 3.10.10 (tags/v3.10.10:aad5f6a, Feb 7 2023, 17:20:36) [MSC v.1929 64 bit (AMD64)]
Version: f2.0.1v1.10.1-previous-313-g8a042934
Commit hash: 8a04293430af3b80760aa0065219256ce0bccc34
Launching Web UI with arguments: --theme dark --ckpt-dir 'C:\Users\alpha\Python\stable-diffusion-webui-forge\models\Stable-diffusion' --lora-dir 'C:\Users\alpha\Python\stable-diffusion-webui-forge\models\Lora' --embeddings-dir 'C:\Users\alpha\Python\stable-diffusion-webui-forge\embeddings' --vae-dir 'C:\Users\alpha\Python\stable-diffusion-webui-forge\models\VAE'
Total VRAM 4096 MB, total RAM 15613 MB
pytorch version: 2.3.1+cu121
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 3050 Laptop GPU : native
Hint: your device supports --cuda-malloc for potential speed improvements.
VAE dtype preferences: [torch.bfloat16, torch.float32] -> torch.bfloat16
CUDA Using Stream: False
Using pytorch cross attention
Using pytorch attention for VAE
ControlNet preprocessor location: C:\Users\alpha\Python\stable-diffusion-webui-forge2\models\ControlNetPreprocessor
Tag Autocomplete: Could not locate model-keyword extension, Lora trigger word completion will be limited to those added through the extra networks menu.
2024-08-18 09:19:48,359 - ControlNet - INFO - ControlNet UI callback registered.
Model selected: {'checkpoint_info': {'filename': 'C:\\Users\\alpha\\Python\\stable-diffusion-webui-forge2\\models\\Stable-diffusion\\flux1-dev-nf4-unet.safetensors', 'hash': 'd9cb14c6'}, 'additional_modules': [], 'unet_storage_dtype': None}
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Startup time: 16.8s (prepare environment: 2.6s, launcher: 2.1s, import torch: 5.5s, initialize shared: 0.5s, other imports: 0.9s, load scripts: 2.0s, create ui: 2.0s, gradio launch: 1.3s).
Environment vars changed: {'stream': False, 'inference_memory': 1024.0, 'pin_shared_memory': False}
I am not familiar with the codebase, but I found out that the path passed via --lora-dir is not handled correctly. I added a log statement to process_network_files:
def process_network_files(names: list[str] | None = None):
    print(f"process_network_files: {shared.cmd_opts.lora_dir}")
    candidates = list(shared.walk_files(shared.cmd_opts.lora_dir, allowed_extensions=[".pt", ".ckpt", ".safetensors"]))
    for filename in candidates:
        if os.path.isdir(filename):
            continue

        name = os.path.splitext(os.path.basename(filename))[0]
        # if names is provided, only load networks with names in the list
        if names and name not in names:
            continue
        try:
            entry = network.NetworkOnDisk(name, filename)
        except OSError:  # should catch FileNotFoundError and PermissionError etc.
            errors.report(f"Failed to load network {name} from {filename}", exc_info=True)
            continue

        available_networks[name] = entry
        if entry.alias in available_network_aliases:
            forbidden_network_aliases[entry.alias.lower()] = 1

        available_network_aliases[name] = entry
        available_network_aliases[entry.alias] = entry
This prints:
process_network_files: /home/user/development/stable-diffusion-webui-forge/~/development/ComfyUI/models/loras
but I would expect it to print only the provided path, ~/development/ComfyUI/models/loras, as seems to be done for the --ckpt-dir parameter.
Okay, I could work around my issue by not using ~ as an alias for /home/user and instead providing the absolute path (in my case /home/user/development/ComfyUI/models/loras).
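A minimal demonstration of why the workaround helps, assuming Forge resolves --lora-dir relative to its install directory (which would mangle paths starting with "~"):

```python
import os

install_dir = "/home/user/development/stable-diffusion-webui-forge"
lora_dir = "~/development/ComfyUI/models/loras"

# os.path.join does not treat "~" specially, so joining reproduces the broken
# path seen in the log above:
broken = os.path.join(install_dir, lora_dir)
print(broken)  # /home/user/development/stable-diffusion-webui-forge/~/development/ComfyUI/models/loras

# Expanding the user home first yields an absolute path, which os.path.join
# returns unchanged -- the same effect as passing the absolute path directly:
fixed = os.path.join(install_dir, os.path.expanduser(lora_dir))
print(fixed)
```

So calling os.path.expanduser on the --lora-dir argument before joining (as an absolute path already is) would presumably make the ~ form work too.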