CivitAI checkpoints?
Hi,
Is it possible to use CivitAI models? I noticed the models are listed in the Hugging Face format, "user_name/model_name". What if I want to add models from CivitAI? I tried one of my pre-downloaded models and edited the config with "home/user/path/to/model/juggernaut.safetensors" (not the exact path). It appeared in the selection, but I got an error saying the model is not supported. Are we only able to use the listed models? Thanks!
I am also at a loss to understand how I can use .safetensors files for LoRA and base models from external sources.
If I try to change the config .txt file, I see errors like:
FileNotFoundError: [Errno 2] No such file or directory: '/home/user/.cache/huggingface/hub/models--somemodel--somemodel_3/refs/main'
If I put in a full path to a safetensors file, the path contains more than one /, which triggers a different error.
It seems that we can only use Hugging Face models? Or do I have to write some Python to get e.g. 'somemodel' to work?
It may well be that I'm not understanding something here about how this works. I'm very new to SD because I thought, until now, that I couldn't run Stable Diffusion without a GPU. Yet here this project is :)
You need to use the diffusers model format, e.g. https://huggingface.co/stablediffusionapi/dreamshaper-v8/
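For anyone unsure what that means: a diffusers-format model is a folder (or Hub repo) containing a model_index.json plus subfolders such as unet/, vae/ and text_encoder/, rather than a single .safetensors file. A minimal sketch of loading one with the diffusers library (the repo id is the one linked above; the prompt and output filename are just placeholders):

```python
from diffusers import DiffusionPipeline

# Load a diffusers-format model by its Hub repo id (or a local folder
# containing model_index.json). Runs on CPU by default.
pipe = DiffusionPipeline.from_pretrained("stablediffusionapi/dreamshaper-v8")

image = pipe("a cozy cabin in the woods at sunset").images[0]
image.save("out.png")
```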
@SmoothBrainApe @processor286 LoRA support has been added; you can use CivitAI LoRA models (.safetensors format).
Latest release: https://github.com/rupeshs/fastsdcpu/releases/tag/v1.0.0-beta.25
How to use LoRA models: https://github.com/rupeshs/fastsdcpu?tab=readme-ov-file#how-to-use-lora-models
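For reference, this is roughly what LoRA loading looks like under the hood with the diffusers library; fastsdcpu's own UI/config handles this for you, and the directory and file name below are placeholders, so treat this only as a sketch:

```python
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("stablediffusionapi/dreamshaper-v8")

# Apply a CivitAI LoRA in .safetensors format on top of the base model.
# Directory and file name are placeholders for your downloaded LoRA.
pipe.load_lora_weights("/path/to/lora_dir", weight_name="my_lora.safetensors")

image = pipe("a portrait in the LoRA's style").images[0]
image.save("lora_out.png")
```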
Yaay, thanks a lot, that really saved my workflow. I really appreciate it. Keep it up :)
Am I reading correctly that fastsdcpu can now read safetensors for LoRA models, but the base models still need to be in diffusers model format? If so, is there an easy way to convert one to the other?
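One approach that should work, assuming you have the diffusers library installed: its from_single_file loader reads a single CivitAI-style .safetensors checkpoint, and save_pretrained then writes it back out as a diffusers-format folder you can point the config at. A rough sketch (both paths are placeholders):

```python
from diffusers import StableDiffusionPipeline

# Read a single-file checkpoint (CivitAI style) ...
pipe = StableDiffusionPipeline.from_single_file("/path/to/juggernaut.safetensors")

# ... and write it out as a diffusers-format folder
# (model_index.json, unet/, vae/, text_encoder/, ...).
pipe.save_pretrained("/path/to/juggernaut-diffusers")
```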