Grauho

36 comments of Grauho

> I tried using t5xxl q3 gguf but it's not supported, please add t5xxl gguf support.
>
> Also, does flux gguf get quantized again? I tried loading it but...

Based on the log you posted, my initial thought is that some of the LoRAs you're using have a non-standard naming convention, so when the loader looks for the corresponding tensors...

> Hmm could be... I'll try replacing "up" and "down" in the code with "A" and "B" and see how it goes. Another option is just to run a quick...

> > > Hmm could be... I'll try replacing "up" and "down" in the code with "A" and "B" and see how it goes. ...

To test this, try using `grep -l "lora_[AB]"` on the LoRAs you used for the initial run and see if the names printed correspond to the ones that didn't...
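A minimal sketch of that check, assuming the LoRAs are .safetensors files in a hypothetical /path/to/loras directory (adjust the glob to match your setup):

```sh
# -l prints only the names of files that contain a match, and -a forces grep
# to treat the binary .safetensors files as text so it searches the header,
# where the tensor names are stored as plain JSON.
grep -la "lora_[AB]" /path/to/loras/*.safetensors
```

Any file listed there is using the "lora_A"/"lora_B" keys rather than the usual "lora_up"/"lora_down" ones.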

Yeah it looks like perhaps some of the new flux LoRA training scripts have decided to use a different variant of the diffusers naming convention, probably won't be too bad...
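For anyone who wants to see exactly which variant a given file uses, one rough way to peek at the tensor names (a sketch assuming a .safetensors LoRA, whose header stores the names as plain text; `your_lora.safetensors` is a placeholder):

```sh
# Print the distinct "lora_*" name fragments found in the file; the newer
# variant shows lora_A / lora_B, the older diffusers-style one lora_up / lora_down.
strings your_lora.safetensors | grep -o "lora_[A-Za-z]*" | sort -u
```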

Did you compile with the appropriate settings for your GPU?

Yes, if you're trying to use it with a CUDA-enabled graphics card you do want to build it with `cmake .. -DSD_CUBLAS=ON` and then `cmake --build . --config Release`, as well...
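Spelled out, the whole sequence looks roughly like this (a sketch assuming an out-of-tree build directory at the root of the stable-diffusion.cpp checkout; only the two cmake lines come from the comment above):

```sh
# From the root of the stable-diffusion.cpp checkout:
mkdir -p build && cd build

# Configure with CUDA (cuBLAS) support enabled, then build in Release mode.
cmake .. -DSD_CUBLAS=ON
cmake --build . --config Release
```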

> @grauho thanks for the help. By CUDA toolchain set do you mean the CUDA toolkit? As I'm using a Kaggle notebook, the CUDA toolkit is properly installed in it.

No problem. Yep...
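A quick way to confirm the toolkit and driver are actually visible from the notebook (standard CUDA commands, nothing specific to this project):

```sh
# The CUDA compiler shipped with the toolkit; if this fails, the toolkit
# isn't installed or isn't on the PATH.
nvcc --version

# The driver-side check; this should list the available GPU(s).
nvidia-smi
```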

> @grauho I have tried `cmake .. -DSD_CUBLAS=ON` and `cmake --build . --config Release` and it gives the error `/home/wiredhikari/flux-api/stable-diffusion.cpp/model.cpp:705:0: required from here /usr/include/c++/13/bits/stl_tree.h:2131:14: internal compiler error: Segmentation fault 2131 | return...
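For what it's worth, an internal compiler error like that points at g++ or the build environment rather than at model.cpp itself. Two general workarounds worth trying (generic gcc/CMake advice, not something confirmed in this thread) are building single-threaded, in case the compiler is being killed for running out of memory, or switching compiler versions:

```sh
# Rebuild with a single job in case the ICE is an out-of-memory kill in a
# constrained environment like a notebook VM.
cmake --build . --config Release -j 1

# Or reconfigure against a different g++ if one is available (g++-12 is only
# an example; use whatever versions the system provides).
cmake .. -DSD_CUBLAS=ON -DCMAKE_CXX_COMPILER=g++-12
```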