ukaprch
I have Windows 10 and I was running into problems with this module as well. I had to make more changes to get it to work. This function is mixing datatypes so...
From my limited experience, a 'nan' returned from a loss function may be due to a CUDA out-of-memory condition and/or a variable with requires_grad=True not having been set beforehand...
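To illustrate, here is a minimal PyTorch sketch (my own illustration, not code from this repo) that checks both of those conditions when a NaN loss shows up; the `check_loss` helper name is made up:

```python
# Illustrative helper (hypothetical name), assuming a plain PyTorch training loop.
import torch

def check_loss(loss: torch.Tensor, model: torch.nn.Module) -> None:
    if torch.isnan(loss).any():
        # 1) A CUDA out-of-memory condition can surface indirectly; look at memory pressure.
        if torch.cuda.is_available():
            print(f"allocated={torch.cuda.memory_allocated() / 1e9:.2f} GB, "
                  f"reserved={torch.cuda.memory_reserved() / 1e9:.2f} GB")
        # 2) Make sure the parameters you expect to train actually require grad.
        frozen = [n for n, p in model.named_parameters() if not p.requires_grad]
        if frozen:
            print(f"{len(frozen)} parameters have requires_grad=False, e.g. {frozen[:3]}")
```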
MultiMask=True, as used in regular SAM, by definition produces the 3 highest-scored images. So why can't HQ do the same?
The following locally stored quantized objects load 100% faster than quantizing them on the fly. #1 Do this once: quantize the transformer and the T5 text encoder in Flux: from diffusers import...
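Roughly the pattern I mean, as a sketch built on optimum-quanto's QuantizedDiffusersModel / QuantizedTransformersModel wrappers; the repo id and output folders below are placeholders, not necessarily the exact ones I used:

```python
# Sketch only: quantize the Flux transformer and T5 text encoder once, save them locally,
# then load the pre-quantized objects on later runs. Repo id and paths are placeholders.
import torch
from diffusers import FluxTransformer2DModel
from transformers import T5EncoderModel
from optimum.quanto import QuantizedDiffusersModel, QuantizedTransformersModel, qint8

class QuantizedFluxTransformer2DModel(QuantizedDiffusersModel):
    base_class = FluxTransformer2DModel

class QuantizedT5EncoderModelForCausalLM(QuantizedTransformersModel):
    auto_class = T5EncoderModel
    auto_class.from_config = auto_class._from_config

repo = "black-forest-labs/FLUX.1-dev"  # placeholder

# #1 Do this once: quantize and save.
transformer = FluxTransformer2DModel.from_pretrained(
    repo, subfolder="transformer", torch_dtype=torch.bfloat16
)
QuantizedFluxTransformer2DModel.quantize(transformer, weights=qint8).save_pretrained(
    "./flux-transformer-qint8"
)

text_encoder_2 = T5EncoderModel.from_pretrained(
    repo, subfolder="text_encoder_2", torch_dtype=torch.bfloat16
)
QuantizedT5EncoderModelForCausalLM.quantize(text_encoder_2, weights=qint8).save_pretrained(
    "./flux-t5-qint8"
)

# #2 Every run after that: load the already-quantized weights instead of re-quantizing.
transformer = QuantizedFluxTransformer2DModel.from_pretrained("./flux-transformer-qint8")
text_encoder_2 = QuantizedT5EncoderModelForCausalLM.from_pretrained("./flux-t5-qint8")
```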
Just to let you all know, this change breaks things for those using Quanto instead of bitsandbytes. Yeah, I know, Quanto seems to be the ugly duckling of quantizers. Before this update...
It's about time. Thanks.
FWIW, I have been successful in using the same T5 encoder for WAN 2.1 for this model just by fiddling with their pipeline: `print('Quantize text_encoder qint8')` `class QuantizedT5EncoderModelForCausalLM(QuantizedTransformersModel): auto_class...`
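The gist of that fiddling, again only as a sketch: define the same quanto wrapper class, load the pre-quantized encoder from disk, and drop it into the other pipeline. The pipeline class, checkpoint id, and local path here are placeholders, not the exact ones I used:

```python
# Sketch only: reuse a locally pre-quantized T5 encoder in another pipeline.
# Pipeline class, repo id, and local path below are placeholders.
import torch
from diffusers import DiffusionPipeline
from transformers import T5EncoderModel
from optimum.quanto import QuantizedTransformersModel

class QuantizedT5EncoderModelForCausalLM(QuantizedTransformersModel):
    auto_class = T5EncoderModel
    auto_class.from_config = auto_class._from_config

# Load the encoder that was quantized and saved earlier.
text_encoder = QuantizedT5EncoderModelForCausalLM.from_pretrained("./flux-t5-qint8")

# Load the pipeline without its own text encoder, then drop in the quantized one
# (assigning the wrapper directly follows the quanto/diffusers example pattern).
pipe = DiffusionPipeline.from_pretrained(
    "some/checkpoint",        # placeholder repo id
    text_encoder=None,
    torch_dtype=torch.bfloat16,
)
pipe.text_encoder = text_encoder
```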
[zohebnsr](https://github.com/zohebnsr) The model is fixed at 384 and then resizes, but you can upscale instead of resizing for a better image.
Thank you for creating this issue. I am also running into this problem. I downgraded the version to v4.41.0 and then the problem was solved.
Do we have a status on this issue? All I see is this issue moving around doing nothing.