gogurtenjoyer

Results: 14 comments by gogurtenjoyer

Chances are you installed the CUDA version of pytorch outside of InvokeAI's venv, leaving the InvokeAI copy of pytorch untouched. The easiest fix for this would be to...
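A quick way to check whether you're actually inside a venv before running `pip install` (a minimal sketch using only the standard library; the InvokeAI venv path is whatever your install uses):

```python
import sys

# Inside an activated venv, sys.prefix points at the venv directory,
# while sys.base_prefix points at the underlying system Python.
def in_virtualenv() -> bool:
    return sys.prefix != sys.base_prefix

# Run this with InvokeAI's venv activated: if it prints False, any
# `pip install torch` you ran went to the system Python instead.
print(in_virtualenv())
```

If this prints `False` in the terminal where you installed pytorch, that install never touched InvokeAI's copy.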

For the mac-specific stuff, I'm not sure - it hasn't happened to me. HOWEVER, there's a fix that people can try, thanks to TimCabbage on Discord: go to the...

Make sure to set the VAE decode to FP32, or else use the 'fixed' FP16 VAE that's available online (sorry, don't have a link).
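For context, the reason FP16 VAE decoding can break is the half-precision value range: FP16 tops out around 65504, so intermediate activations can overflow (producing inf/NaN and black images). A standard-library sketch of that limit, using `struct`'s `'e'` half-precision format:

```python
import struct

# 65504.0 is the largest finite float16 value and round-trips exactly.
packed = struct.pack('e', 65504.0)  # 'e' = IEEE 754 half precision
assert struct.unpack('e', packed)[0] == 65504.0

# Anything larger cannot be represented as a finite float16.
try:
    struct.pack('e', 70000.0)
except (OverflowError, struct.error):
    print("70000.0 overflows float16")
```

Decoding in FP32 (or using a VAE fine-tuned to keep activations in range) avoids the overflow.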

@jlcases this is something that'll actually happen to everyone using pytorch on MPS - doing what that note says should 'fix' it. InvokeAI's `invoke.sh` does this by default but if...
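The note in question is pytorch's suggestion to set `PYTORCH_ENABLE_MPS_FALLBACK=1`, so that ops not yet implemented on MPS fall back to the CPU instead of erroring. A sketch of setting it from Python when you aren't launching through `invoke.sh`; it must be set before `torch` is imported:

```python
import os

# Must be set BEFORE `import torch`, or pytorch won't see it.
os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"

# import torch  # imported afterwards; missing MPS ops now run on CPU
print(os.environ["PYTORCH_ENABLE_MPS_FALLBACK"])
```

`invoke.sh` exports the same variable for you, which is why launching through it avoids the error.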

Hello! Unfortunately, patchmatch isn't working on Mac at the moment, so that error is expected. More info here: https://invoke-ai.github.io/InvokeAI/installation/060_INSTALL_PATCHMATCH/#macintosh

Nope, patchmatch is optional - everything else should work fine. Also, if you do want to use patchmatch on Mac, there's now some instructions here: https://github.com/invoke-ai/InvokeAI/discussions/1893

This is an issue with long prompts and version 2.2.4 of InvokeAI - you can try the prerelease of 2.2.5 which fixes this (truncates the tokens as expected) or just...
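For background, Stable Diffusion's CLIP text encoder accepts at most 77 tokens per prompt chunk; "truncates the tokens as expected" just means dropping everything past that limit instead of crashing. An illustrative sketch (not InvokeAI's actual tokenizer code):

```python
MAX_TOKENS = 77  # CLIP's context length, including start/end tokens

def truncate(tokens: list) -> list:
    """Keep only the first MAX_TOKENS tokens; extra tokens are discarded."""
    return tokens[:MAX_TOKENS]

long_prompt = [f"tok{i}" for i in range(200)]  # stand-in for real token ids
print(len(truncate(long_prompt)))  # 77
```

Short prompts pass through unchanged; only over-long ones are clipped.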

Looks like you opened the shell script in a text editor and pasted it here instead of running it in the terminal. To run a shell script, make sure you're...
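For the record, a minimal sketch of running (rather than opening) a shell script from the terminal; `demo.sh` here is a stand-in for whichever script your install provides:

```shell
# Create a tiny stand-in script (you'd already have the real one)
cat > demo.sh <<'EOF'
#!/bin/sh
echo "script ran"
EOF

chmod +x demo.sh   # mark it executable
./demo.sh          # run it from the terminal; don't open it in an editor
```

The key points: `cd` into the script's directory first, and execute it with `./name.sh` instead of double-clicking or pasting its contents.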

That error message means that you're out of memory. Without knowing your GPU and VRAM, I can't tell if this is understandable/expected or not.

Please make sure to check the box labelled 'Add python.exe to PATH' in the Python installer.
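After re-running the installer with that box ticked, you can confirm it worked from a fresh terminal. On Windows the command is `where python`; this sketch shows the POSIX equivalent:

```shell
# If Python is on PATH, this prints the interpreter's location;
# if it prints nothing, PATH wasn't updated - re-run the installer.
command -v python3 || command -v python
```

Open a new terminal window after installing, since existing windows keep the old PATH.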