
Results 138 comments of NeedsMoar

Try downloading and installing this: https://github.com/NeedsMoar/flash-attention-2-builds/releases/download/v2.3.6-flash-attention/flash_attn-2.3.6-cp311-cp311-win_amd64.whl It's old, but I have it installed and have never seen that message. If it breaks something even more, just remove it. I'm...

> Try downloading and installing this...

There's a bigger cluster in that flash_attn is in the same repo as flash_attn-2 but isn't built by default and is a different codebase.

Probably. Controlnet might make assumptions that only apply to models trained with the kohya-ss scripts (new weights they insert, maybe?), since pretty much nobody runs the base model. If nothing...

That sounds like the nodes are printing the output with the Python format modifier still in the string but no variable substituted into it. Some nodes parse strings as expressions that can contain math...
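A minimal sketch of that failure mode, assuming the node does something like the following (the value and format spec here are made up for illustration):

```python
value = 3.14159

# Intended behaviour: apply the format modifier to a variable
print("{:.2f}".format(value))  # prints "3.14"

# Suspected bug: the format modifier is left in the string with no
# variable substituted, so the placeholder text itself gets printed
print("{:.2f}")
```

The second print emits the literal placeholder, which would explain seeing raw format modifiers in the node output.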

Your negative embeddings need to be written with "embedding:" in front of them, which could account for a lot of the difference since you're using 3 negative embeddings. Also, most LoRAs...

Also, if you need to turn off LoRAs, right-click them and click bypass rather than setting them to 0.0. If you use the arrows to set them you might...

You should probably fix the title and link; there's no model from MIT called MDM. Right now it reads like DMD has been implemented somewhere and MDM is new ("After...

One side effect of their difference-based training is that CFG has to be at 1. Other values just produce double exposures, with additional images appearing as the scale increases. I couldn't...
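That follows from the standard classifier-free guidance mix: at a scale of 1 the unconditional term cancels and the combined prediction reduces to the conditional prediction alone. A sketch with made-up scalar values, not the repo's actual code:

```python
def cfg_combine(uncond, cond, scale):
    """Standard classifier-free guidance: extrapolate from uncond toward cond."""
    return uncond + scale * (cond - uncond)

# At scale 1 only the conditional prediction remains, which is why a
# model trained to emit the full signal directly needs CFG = 1
print(cfg_combine(1.0, 3.0, 1.0))  # 3.0

# Larger scales extrapolate past cond -- the "double exposure" regime
print(cfg_combine(1.0, 3.0, 4.0))  # 9.0
```

Any scale other than 1 mixes in an extrapolated component the model was never trained to handle, which matches the double-exposure artifacts described above.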

Prompts all converge towards "Young asian woman wearing black jacket and jeans with no shirt on under it and nipples censored" as CFG approaches zero, btw. It also seems like...

I think he wants it to convert directly into base64 of a PNG without the intermediate step of writing the binary PNG, which seems technically possible but questionably useful. You'd need...
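A sketch of why the intermediate PNG never really goes away: base64 can only encode bytes, so the PNG byte stream still has to be built somewhere, just in memory rather than on disk. This builds a minimal 1x1 PNG with only the standard library (no image libraries assumed):

```python
import base64
import struct
import zlib

def png_chunk(ctype: bytes, data: bytes) -> bytes:
    """Length + type + data + CRC32, per the PNG chunk layout."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

def tiny_png_base64() -> str:
    # IHDR: 1x1, 8-bit depth, colour type 2 (truecolour RGB)
    ihdr = struct.pack(">IIBBBBB", 1, 1, 8, 2, 0, 0, 0)
    # One scanline: filter byte 0 followed by a single red RGB pixel
    idat = zlib.compress(b"\x00\xff\x00\x00")
    png = (b"\x89PNG\r\n\x1a\n"
           + png_chunk(b"IHDR", ihdr)
           + png_chunk(b"IDAT", idat)
           + png_chunk(b"IEND", b""))
    # The "binary png" exists right here in memory; base64 is just
    # a re-encoding of those same bytes
    return base64.b64encode(png).decode("ascii")

print(tiny_png_base64()[:20])
```

So "skipping the PNG" can only mean skipping the file on disk; the encode step itself is unavoidable.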