
> Hi,
>
> I found an issue that if we compile and trace the model with resolution divisible by 32, after that if we inference the model with resolution...

@HoiM I guess you should build a DeepCache pipeline manually: compile the model first, then use `DeepCacheSDHelper` to enable DeepCache. See https://github.com/horseee/DeepCache/blob/master/main.py
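A minimal sketch of that ordering, assuming stable-fast's `compile`/`CompilationConfig` API and DeepCache's `DeepCacheSDHelper`; the model id and cache parameters are illustrative, not prescribed by the comment:

```python
import torch
from diffusers import StableDiffusionPipeline
from sfast.compilers.diffusion_pipeline_compiler import compile, CompilationConfig
from DeepCache import DeepCacheSDHelper

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Step 1: compile the pipeline with stable-fast first.
config = CompilationConfig.Default()
pipe = compile(pipe, config)

# Step 2: only then enable DeepCache on the compiled pipeline.
helper = DeepCacheSDHelper(pipe=pipe)
helper.set_params(cache_interval=3, cache_branch_id=0)  # illustrative values
helper.enable()

image = pipe("a photo of an astronaut", num_inference_steps=30).images[0]
```

Enabling DeepCache before compilation would bake the helper's hooks into the traced graph, which is why the compile-then-enable order matters here.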

@blacklig Do you have the CUDA toolkit installed? But always remember that training support is only experimental and won't bring any significant improvement so far. Currently, `stable-fast`'s main advantage is inference.

@blacklig See the examples, or this Colab: https://colab.research.google.com/github/camenduru/stable-fast-colab/blob/main/stable_fast_colab.ipynb

Instead of deep-copying the model, I would suggest saving the model into a file-like object, since that's the standard way `diffusers` uses.
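The pattern can be sketched with the stdlib alone; `TinyModel` is a stand-in for a real pipeline, which would go through `torch.save` or `save_pretrained` instead of `pickle`:

```python
import io
import pickle

class TinyModel:
    """Stand-in for a real model object."""
    def __init__(self):
        self.weights = [1.0, 2.0, 3.0]

model = TinyModel()

# Serialize into an in-memory file-like object instead of deep-copying.
buf = io.BytesIO()
pickle.dump(model, buf)

# Rewind and load to get an independent copy of the model.
buf.seek(0)
clone = pickle.load(buf)

print(clone.weights)   # [1.0, 2.0, 3.0]
print(clone is model)  # False
```

Round-tripping through a buffer sidesteps objects that `copy.deepcopy` handles poorly (hooks, weak references) while still producing a fully independent copy.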

> Hello, I see that in the read me that `stable-fast` supports LoRAs out of the box. Does anyone know if it's possible to use Lycoris LoRAs with this package?...

> I get the same error under windows. It runs, but extremely slowly. Inference takes 40s per iteration on my 1050 ti, 1.5s without stable-fast.
>
> stable-fast Nightly Release...

> Reading up, it's a whole family of algorithms similar to lora:
>
> https://github.com/KohakuBlueleaf/LyCORIS/blob/main/docs/Algo-List.md
>
> The diffusers framework mentions partial support for some of them which means they...

@xziayro You can use a tensor to contain your regional prompt. That should not trigger recompilation.

> cc: @bertmaher , I'm guessing this is using the new triton 3.0 branch? Not sure how to tell which triton hash is being used there. Thank you for your...