araleza
Hey, thanks for providing this super resolution network, it's produced some great output for me. I do have an issue to report though. As well as upsampling the image, the...
### Describe the bug

Hi, I `git clone`d a fresh checkout of webui, and then ran `./start_linux.sh`. It installed some stuff, but then failed with:

```
*******************************************************************
* WARNING: You...
```
So I have a GPTQ LLaMA model I downloaded (from TheBloke), and it's already 4-bit quantized. I have to pass in `False` for the `load_in_4bit` parameter of:

```
model,...
```
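A minimal sketch of the logic described above, assuming a transformers-style `from_pretrained` call; the helper name `quantization_kwargs` is hypothetical, not part of any library:

```python
# Hypothetical helper: a GPTQ checkpoint is already quantized, so asking the
# loader to 4-bit-quantize it again (load_in_4bit=True) must be avoided.
def quantization_kwargs(model_is_prequantized: bool) -> dict:
    """Return load kwargs for a transformers-style from_pretrained call."""
    if model_is_prequantized:
        # GPTQ weights: do NOT request a second, on-the-fly quantization pass.
        return {"load_in_4bit": False}
    # Full-precision weights: on-the-fly 4-bit quantization is fine.
    return {"load_in_4bit": True}

print(quantization_kwargs(True))   # pre-quantized GPTQ model from TheBloke
```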
Looking at the DoRA paper, it seemed to get impressive results:

[figure from https://arxiv.org/pdf/2402.09353]

There even seemed to be some indications that QDoRA could outperform full fine-tuning:

[figure]

Is...
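For reference, the DoRA reparameterization from the paper splits a weight into a magnitude vector and a LoRA-updated direction. A minimal NumPy sketch (shapes and rank are arbitrary illustrative choices, not from any specific model):

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r = 8, 6, 2

W0 = rng.normal(size=(d_out, d_in))            # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01          # LoRA down-projection (trainable)
B = np.zeros((d_out, r))                       # LoRA up-projection (trainable, init 0)
m = np.linalg.norm(W0, axis=0, keepdims=True)  # magnitude, init to column norms of W0

# Direction component: pretrained weight plus the low-rank LoRA update.
V = W0 + B @ A
# DoRA weight: magnitude times the column-normalized direction.
W_dora = m * (V / np.linalg.norm(V, axis=0, keepdims=True))
```

At initialization `B` is zero, so `V == W0` and the column norms cancel against `m`, leaving `W_dora == W0`; training then updates `m`, `A`, and `B` while `W0` stays frozen.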
If you finetune SDXL base with:

```
--train_text_encoder --learning_rate_te1 1e-10 --learning_rate_te2 1e-10 --fused_backward_pass
```

Then it will train fine. But if you stop training and restart by training from the...