FNSpd
> do we have a mode to do something like 50% iteration control? let's say we have 50 sampling steps, perhaps we can just control the first 25 steps. There's...
Using "only masked" option in inpaint tab wouldn't be as resource heavy as doing hires fix. It essentially crops masked area to whatever resolution is set and works only on...
> > Using the "only masked" option in the inpaint tab wouldn't be as resource-heavy as doing hires fix. It essentially crops the masked area to whatever resolution is set and works...
Also, I don't know if it's worth noting, but setting "--precision full" results in "expected Float but found Half".
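For context, that error is PyTorch's generic dtype-mismatch message. A minimal, self-contained reproduction (a hypothetical example, not webui code) is fp32 weights, as with --precision full, receiving activations that are still fp16:

```python
import torch

# Hypothetical repro, not webui code: fp32 layer, fp16 input.
linear = torch.nn.Linear(4, 4)   # weights stay in float32
x = torch.randn(1, 4).half()     # activations still in float16
try:
    linear(x)
except RuntimeError as e:
    # e.g. "expected scalar type Float but found Half"
    # (exact wording varies by PyTorch version)
    print(e)
```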
> what model is being used? tried to switch model?
>
> also is xformers installed?

The original 1.5 model; I tried switching models. Will try to experiment with it a little more....
Not sure if this helps, but it seems the error happens while calculating x_out in sd_samplers_kdiffusion. None of the variables before the calculation are NaN, but the output gives a tensor full...
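To confirm where the NaNs first appear, a throwaway check can be dropped in right before and after x_out is computed (a debugging sketch; report_nans is a made-up helper name, nothing webui-specific assumed):

```python
import torch

def report_nans(name, t):
    # Print how many NaN entries a tensor contains, if any.
    nans = torch.isnan(t).sum().item()
    if nans:
        print(f"{name}: {nans} NaN values out of {t.numel()}")

# e.g. call report_nans("x_out", x_out) right after x_out is assigned
# in sd_samplers_kdiffusion, and the same on its inputs just before,
# to pin down whether the UNet call itself produces the NaNs.
```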
Managed to get it working partially (still slower than --no-half but faster than without it). I left benchmark enabled and turned on --upcast-sampling and --precision full. TIs and hypernetworks work, but LoRAs throw...
Solved the LoRA problem by adding "input = devices.cond_cast_unet(input)" at the beginning of the lora_forward function. It now works, but generation becomes slower with LoRAs. I've seen some people reporting slowdown...
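For anyone trying to reproduce this, here is roughly where the added line goes (a sketch, not the verbatim upstream lora_forward; the original body is elided):

```python
from modules import devices

def lora_forward(module, input, res):
    # Added line: cast activations to the UNet dtype so fp32 inputs
    # from --upcast-sampling don't hit fp16 LoRA projection weights.
    input = devices.cond_cast_unet(input)
    # ... original body: add each loaded LoRA's up(down(input)) onto res ...
    return res
```

As I understand it, devices.cond_cast_unet only casts when upcast sampling makes the dtypes differ, so the added line should be a no-op in other configurations.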
Does this happen only with this model or with all models?
Is this with my changes or with the current version using --upcast-sampling? Also, which PyTorch version do you have? I didn't test on 2.0, only 1.13.1.
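A quick way to confirm which build the webui's environment is actually using (plain PyTorch, nothing webui-specific assumed):

```python
import torch

# Report the PyTorch build and the CUDA toolkit it was compiled against.
print(torch.__version__, torch.version.cuda)
```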