
Using low-rank adaptation (LoRA) to quickly fine-tune diffusion models.
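The idea behind LoRA can be sketched in a few lines: the pretrained weight `W` is frozen, and training only updates a low-rank pair `B` and `A` whose product `B @ A` (scaled by `alpha / r`) is added to the base forward pass. The snippet below is a minimal illustrative sketch, not this repo's actual implementation; all names and dimensions are made up for the example.

```python
import numpy as np

# Minimal LoRA sketch (illustrative only, not this repo's code).
# The frozen base weight W is augmented with a trainable low-rank
# update delta_W = B @ A, scaled by alpha / r.
d_out, d_in, r, alpha = 8, 8, 2, 4

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))   # frozen pretrained weight
A = rng.standard_normal((r, d_in))       # trainable, rank r
B = np.zeros((d_out, r))                 # zero-init, so delta_W starts at 0

def lora_forward(x):
    # base path + low-rank adapter path
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# With B = 0 the adapted layer matches the frozen base exactly,
# which is why LoRA training starts from the pretrained behavior.
assert np.allclose(lora_forward(x), W @ x)
```

Because only `A` and `B` are trained, the number of trainable parameters is `r * (d_in + d_out)` per layer instead of `d_in * d_out`, which is what makes the fine-tune fast and the resulting file small.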

Results: 109 lora issues, sorted by recently updated.

```
warnings.warn(f"`log_with={log_with}` was passed but no supported trackers are currently installed.")
Before training: Unet First Layer lora up
tensor([[0., 0., 0., 0.],
        [0., 0., 0., 0.],
        [0., 0., 0., 0.],
        ...,...
```

I've always been curious about how LoRA works. For example, if I trained my LoRA parameters on SD-1-4, can I apply them directly to the base model of SD-1-5? According...
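Mechanically, merging a trained LoRA into a checkpoint is just adding the low-rank delta to the base weight, so the same delta can be applied to any checkpoint whose weight shapes match. The sketch below illustrates that shape-level portability (whether the result looks good on a different base is a separate question); all names here are hypothetical.

```python
import numpy as np

# Sketch: merging one trained LoRA update into two different base
# checkpoints. The merge only requires matching weight shapes, which
# is why a LoRA trained on one base model can often be loaded into a
# closely related one.
d_out, d_in, r, alpha = 6, 6, 2, 4
rng = np.random.default_rng(1)

B = rng.standard_normal((d_out, r))
A = rng.standard_normal((r, d_in))
delta = (alpha / r) * (B @ A)                  # the trained LoRA update

W_base_a = rng.standard_normal((d_out, d_in))  # stands in for "SD-1-4" weights
W_base_b = rng.standard_normal((d_out, d_in))  # stands in for "SD-1-5" weights

merged_a = W_base_a + delta
merged_b = W_base_b + delta                    # same delta applies: shapes match
assert merged_a.shape == merged_b.shape == (d_out, d_in)
```

If the two base models have different layer shapes or layer names, the merge fails outright; when the architectures match, the merge succeeds but the visual quality of the cross-applied LoRA is an empirical matter.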

changed diffusers 🤗 version

LoRA attention is taken from the Hugging Face 🤗 diffusers library. They have modified their codebase, and the new version, v0.23.0, is incompatible with this codebase. To install diffusers and to...
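When a downstream repo breaks against a new diffusers release, a common workaround is to pin diffusers below the breaking version. The cutoff below comes from the issue above (v0.23.0 reported as incompatible); verify the actual known-good version against this repo's own requirements before relying on it.

```shell
# Pin diffusers below the release reported as incompatible (v0.23.0).
# Check the repo's requirements file for the exact tested version.
pip install "diffusers<0.23.0"
```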

https://stability.ai/blog/sdxl-09-stable-diffusion https://github.com/Stability-AI/generative-models any plans? :)

I got this error while running two different scripts: run_lora_db_unet_only.sh and use_face_conditioning_example.sh. I only made changes for my own data and output directory. Also, for the second one I set...

I'm trying to work in Kaggle notebooks.

```
File /opt/conda/lib/python3.10/site-packages/lora_diffusion/lora.py:1012, in patch_pipe(pipe, maybe_unet_path, token, r, patch_unet, patch_text, patch_ti, idempotent_token, unet_target_replace_module, text_target_replace_module)
   1010 elif maybe_unet_path.endswith(".safetensors"):
   1011     safeloras = safe_open(maybe_unet_path, framework="pt", device="cpu")...
```

Can someone please help me figure out why inpainting is not working for me while basic image generation seems to be working? I have a folder with 500 images of...

As the readme says:

```
export MODEL_NAME="runwayml/stable-diffusion-v1-5"
export INSTANCE_DIR="./data/data_disney"
export OUTPUT_DIR="./exps/output_dsn"

lora_pti \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --instance_data_dir=$INSTANCE_DIR \
  --output_dir=$OUTPUT_DIR \
  --train_text_encoder \
  --resolution=512 \
  --train_batch_size=1 \
  --gradient_accumulation_steps=4 \
  --scale_lr \
  --learning_rate_unet=1e-4 \
  --learning_rate_text=1e-5 ...
```

Directly running the code in run_inference.ipynb resulted in an error. Does anyone have the same problem, and how should I solve it?