
fix root-finding for conda users

jli opened this issue 3 years ago

PR #1948 added logic for finding the runtime root directory relative to $VIRTUAL_ENV, but conda users don't have this env var set. This PR changes the default root directory to be relative to the globals.py module instead.
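
For illustration, a minimal sketch of the two approaches (this is not the actual globals.py code; the number of parent directories and the ~/invokeai fallback are assumptions made just for the example):

import os
from pathlib import Path

def root_from_virtual_env() -> Path:
    """Behavior added in PR #1948: derive the runtime root from $VIRTUAL_ENV."""
    venv = os.environ.get("VIRTUAL_ENV")
    if venv is not None:
        # Assumed layout for this example: the runtime root is the venv's parent.
        return Path(venv).parent
    # Conda does not set VIRTUAL_ENV, so conda users fall through to the default.
    return Path.home() / "invokeai"

def root_from_module() -> Path:
    """Proposed behavior: derive the runtime root from this module's own location,
    which works the same way for venv, conda, or any other install method."""
    # Assumed for this example: the root is the directory containing the module.
    return Path(__file__).resolve().parent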

I use conda for my InvokeAI environment and after pulling main, I was getting errors running scripts/invoke.py because it couldn't find models at the default ~/invokeai location.

(btw, https://github.com/invoke-ai/InvokeAI/blob/main/README.md#contributing says that PRs should be against the development branch, but it seems like most recent PRs were merged straight into main. Is merging into main ok? Should PRs against development just be reserved for larger/riskier changes?)

jli avatar Dec 14 '22 02:12 jli

@jli First, thanks for contributing! Yes main is perfect - We've made a shift to merging to main recently and that doc needs to get updated... clearly! :)

I'm going to invoke @lstein to look at this, as touching anything related to the runtime directory is highly brittle right now.

hipsterusername avatar Dec 14 '22 02:12 hipsterusername

Thanks for flagging this, @jli. Since you're using conda, I assume you've done a manual install? In that case, the easiest way to point InvokeAI at your runtime directory is the INVOKEAI_ROOT environment variable or the --root CLI flag.
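
For example (the path below is just a placeholder for your actual runtime directory):

export INVOKEAI_ROOT=/path/to/your/runtime-dir
python scripts/invoke.py --web

or, passing the location directly on the command line:

python scripts/invoke.py --web --root /path/to/your/runtime-dir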

Your proposed solution will work if your conda environment happens to live inside your InvokeAI runtime directory. But this will not be the case for every conda user (and is more likely not to be the case, I would say). Therefore, setting your runtime directory location explicitly using one of the supported methods is the safest bet when working with a manual install.

Please let me know if that makes sense, and correct me if I'm wrong about any of my assumptions.

ebr avatar Dec 14 '22 06:12 ebr

Closing for now, but please feel free to reopen if you have more ideas for improving root directory finding.

lstein avatar Dec 15 '22 14:12 lstein

This doesn't seem to work for me. I did the conda install, and whenever I try to run InvokeAI via the command line, I get the same result:

$ invoke.py --web --root_dir "/path/to/my/runtime-dir/" --no-nsfw_checker
>> patchmatch.patch_match: INFO - Compiling and loading c extensions from "/opt/miniconda3/envs/invokeai/lib/python3.10/site-packages/patchmatch".
>> patchmatch.patch_match: ERROR - patchmatch failed to load or compile (Command 'make clean && make' returned non-zero exit status 2.).
>> patchmatch.patch_match: INFO - Refer to https://github.com/invoke-ai/InvokeAI/blob/main/docs/installation/INSTALL_PATCHMATCH.md for installation instructions.
>> Patchmatch not loaded (nonfatal)
* Initializing, be patient...
>> Initialization file /path/to/my/runtime-dir/invokeai.init found. Loading...
>> InvokeAI runtime directory is "/path/to/my/runtime-dir/"
>> GFPGAN Initialized
>> CodeFormer Initialized
>> ESRGAN Initialized

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
   You appear to have a missing or misconfigured model file(s).                   
   The script will now exit and run configure_invokeai.py to help fix the problem.
   After reconfiguration is done, please relaunch invoke.py.                      
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

And then it launches into the configuration wizard again.

So every time I run InvokeAI it just tries to run the configure_invokeai.py script again.

I'll look into the code to see where the model file validation happens; there's clearly something not working correctly.

worldveil avatar Dec 20 '22 06:12 worldveil