Results 83 comments of LukeG89

@aprisma2008 You are missing `llama_cpp_python`, a dependency that Searge LLM needs to run the models. The link provided by the Searge node can install it for Python 3.11, but the desktop...

Have you managed to install llama-cpp-python?
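Before retrying the node, it can help to confirm whether the package is actually visible to the Python environment ComfyUI runs in. This is a minimal, hypothetical check (not part of Searge LLM itself) using only the standard library:

```python
import importlib.util

def llama_cpp_available() -> bool:
    """Return True if the llama_cpp module (installed by the
    llama-cpp-python package) can be found in this environment."""
    return importlib.util.find_spec("llama_cpp") is not None

if not llama_cpp_available():
    # The package name on PyPI is llama-cpp-python; the module it installs is llama_cpp
    print("Missing dependency; install with: pip install llama-cpp-python")
```

Run it with the same Python interpreter that ComfyUI uses; a portable or desktop install often ships its own interpreter, so the package must be installed there, not in the system Python.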

You should try this ComfyUI version, I see there are installation instructions for old cards: https://github.com/patientx/ComfyUI-Zluda

It's not just with GGUF models (and not just Wan). I just ran some tests; I was about to open a new bug report, but I guess I could post...

> Do you have a comparison basis for an older version or is it just about the lora performance discrepancy?

No, unfortunately I don't have a comparison with previous versions....

> @LukeG89 these options are performance critical, especially pinned memory. If you have a problem with these options left on we can take that as a straight bug report

Never...

One of the models is probably wrong; I'd guess it's the VAE (suggestion: rename your model files more clearly so they are easier to tell apart).

Try using this VAE: https://huggingface.co/black-forest-labs/FLUX.1-schnell/blob/main/ae.safetensors

If you also need the CLIP-L and T5-XXL text encoders, they are here: https://huggingface.co/comfyanonymous/flux_text_encoders/tree/main

Anyway, the LoRA is also not compatible, which is why you get all those `WARNING - lora key not loaded` messages.
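For context, those warnings appear when the loader cannot map a LoRA weight key onto any layer of the base model. A minimal sketch of that matching logic (the key names below are hypothetical, and this is a simplification, not ComfyUI's actual loader):

```python
def find_unloadable_lora_keys(lora_keys, model_keys):
    """Return the LoRA keys that have no matching layer in the model.

    Each such key would produce one 'lora key not loaded' warning.
    """
    # Common LoRA tensor suffixes; stripping one recovers the target layer name
    suffixes = (".lora_up.weight", ".lora_down.weight", ".alpha")
    unloaded = []
    for key in lora_keys:
        base = key
        for s in suffixes:
            if base.endswith(s):
                base = base[: -len(s)]
                break
        if base not in model_keys:
            unloaded.append(key)
    return unloaded

# A Flux-style model does not contain SDXL-style layer names, so every key
# of an incompatible LoRA ends up in the warning list (names are made up):
model_keys = {"double_blocks.0.img_attn.qkv"}           # hypothetical Flux layer
lora_keys = ["unet.up_blocks.0.attn.lora_up.weight"]    # hypothetical SDXL LoRA key
print(find_unloadable_lora_keys(lora_keys, model_keys))
```

So when *every* key of a LoRA shows up in those warnings, the LoRA was almost certainly trained for a different model architecture.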

Now they moved the option.

### On portable/manual installation
#### Go to Settings and search "preview"

_____

### On Desktop app
#### Go to Settings > Server-Config and find Preview