ai_anchorite

4 comments by ai_anchorite

The project is looking very promising! A related issue that has likely been mentioned and is probably already on your roadmap, but just in case: the LoRA loader needs a scrollbar, as only...
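
A scrollbar could probably be added on the project side with a bit of custom CSS. A minimal sketch, assuming a Gradio UI; the `.lora-list` class name, the 300px height, and the checkbox group are placeholders, not the project's actual component names:

```python
import gradio as gr

# Constrain a (hypothetical) LoRA list to a fixed height so it scrolls.
# ".lora-list" and 300px are placeholder values, not the project's real names.
css = ".lora-list { max-height: 300px; overflow-y: auto; }"

with gr.Blocks(css=css) as demo:
    lora_picker = gr.CheckboxGroup(
        choices=[f"lora_{i}" for i in range(50)],  # stand-in for the real LoRA filenames
        label="LoRAs",
        elem_classes=["lora-list"],  # hooks the component to the CSS rule above
    )

demo.launch()
```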

It's a compatibility problem between the installed Gradio version and fastapi, or something along those lines. You just have to downgrade Gradio to version 4.43.0 to "fix" it. On Windows, this has to be done with **Anaconda...
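
If it helps, here's a quick sanity check from Python that the pin is in place. The 4.43.0 pin is the workaround mentioned above; everything else is just illustrative:

```python
# Check whether the pinned Gradio version is installed and print the downgrade command if not.
import importlib.metadata

REQUIRED = "4.43.0"  # version the workaround above downgrades to
installed = importlib.metadata.version("gradio")

if installed != REQUIRED:
    print(f"gradio {installed} is installed; downgrade with: pip install gradio=={REQUIRED}")
else:
    print(f"gradio {REQUIRED} is installed - good to go")
```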

That model loading behaviour and the low VRAM usage during inference are consistent with CPU offload. Inference, including VAE decode etc., consistently takes me 20 minutes. Less than 64GB RAM may push...
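
For context, this is roughly what sequential CPU offload looks like with a diffusers-style pipeline. A sketch only: the model ID and prompt are placeholders, and the offload call is the standard diffusers API rather than this project's exact setup:

```python
# Sketch of sequential CPU offload with a diffusers-style pipeline.
# "some/model-id" is a placeholder, not the project's actual checkpoint.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("some/model-id", torch_dtype=torch.bfloat16)

# Modules are moved to the GPU one at a time and back to system RAM afterwards,
# which keeps VRAM low but makes inference much slower and leans heavily on RAM.
pipe.enable_sequential_cpu_offload()

result = pipe(prompt="a test prompt", num_inference_steps=30)
```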

> @ai-anchorite I just found out that for running on a 3090 you can disable CPU offloading (#pipe.enable_sequential_cpu_offload()) - leaving all other mem optimizations enabled. Then ~20GB VRAM are used,...
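
In case it's useful to others landing here, that suggestion would look roughly like this. A sketch assuming a diffusers-style pipeline; the lighter memory optimizations shown are common diffusers calls, not necessarily the exact set this project enables:

```python
# Skip sequential CPU offload so the model stays on the GPU (~20GB VRAM on a 3090
# per the quote above), while keeping lighter memory optimizations enabled.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("some/model-id", torch_dtype=torch.bfloat16)

# pipe.enable_sequential_cpu_offload()  # disabled: trades the 20-minute runs for higher VRAM use
pipe.to("cuda")

pipe.enable_vae_slicing()  # decode the VAE in slices to reduce peak memory
pipe.enable_vae_tiling()   # tile large latents during VAE decode
```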