InvokeAI
[bug]: macOS Installer 2.2.4: Segmentation fault: 11 using the browser-based UI
Is there an existing issue for this?
- [X] I have searched the existing issues
OS
macOS
GPU
mps
VRAM
128G
What happened?
I downloaded InvokeAI-installer-2.2.4-mac.zip and installed it using the default values.
I start invoke.sh using the browser-based UI option. The server starts as usual. Then, as soon as I access the server with the browser, I get the following error:
/invoke.sh: line 24: 11785 Segmentation fault: 11 .venv/bin/python .venv/bin/invoke.py --web $*
The command-line works as expected.
NOTE: When I manually install the same version 2.2.4 using git clone and Conda, both the browser-based UI and the command-line work as expected.
Attached are:
Log at 2022-12-11 11-24-01 PM.txt, problem-report.txt
Screenshots
Do you want to generate images using the
1. command-line
2. browser-based UI
3. open the developer console
Please enter 1, 2, or 3: 2
Starting the InvokeAI browser-based UI..
patchmatch.patch_match: INFO - Compiling and loading c extensions from "/Users/ivano/invokeai/.venv/lib/python3.10/site-packages/patchmatch".
patchmatch.patch_match: WARNING - patchmatch failed to load or compile.
patchmatch.patch_match: WARNING - Refer to https://github.com/invoke-ai/InvokeAI/blob/main/docs/installation/INSTALL_PATCHMATCH.md for installation instructions.
Patchmatch not loaded (nonfatal)
- Initializing, be patient...
Initialization file /Users/ivano/invokeai/invokeai.init found. Loading...
InvokeAI runtime directory is "/Users/ivano/invokeai"
GFPGAN Initialized
CodeFormer Initialized
ESRGAN Initialized
Using device_type mps
Initializing safety checker
Current VRAM usage: 0.00G
Scanning Model: stable-diffusion-1.5
Model Scanned. OK!!
Loading stable-diffusion-1.5 from /Users/ivano/invokeai/models/ldm/stable-diffusion-v1/v1-5-pruned-emaonly.ckpt
| LatentDiffusion: Running in eps-prediction mode
| DiffusionWrapper has 859.52 M params.
| Making attention of type 'vanilla' with 512 in_channels
| Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
| Making attention of type 'vanilla' with 512 in_channels
| Using more accurate float32 precision
| Loading VAE weights from: /Users/ivano/invokeai/models/ldm/stable-diffusion-v1/vae-ft-mse-840000-ema-pruned.ckpt
Model loaded in 5.49s
Current embedding manager terms: *
Setting Sampler to k_lms
- --web was specified, starting web server...
Initialization file /Users/ivano/invokeai/invokeai.init found. Loading...
Started Invoke AI Web Server!
Default host address now 127.0.0.1 (localhost). Use --host 0.0.0.0 to bind any address.
Point your browser at http://127.0.0.1:9090
System config requested
./invoke.sh: line 24: 10899 Segmentation fault: 11 .venv/bin/python .venv/bin/invoke.py --web $*
Additional context
Mac Studio M1 Ultra, Memory: 128GB, macOS Ventura 13.0.1
Python 3.10.8
pip 22.3.1
protobuf 4.21.11
setuptools 65.4.1
wheel 0.37.1
PyTorch 1.12.1
Contact Details
No response
@i3oc9i I have the same problem!!
I have repeated the test with 2.2.4 official tag, and the issue is confirmed.
@Vargol can you see if you get the same issue on your Mac, please?
same as #1888 ?
IMHO no, because if you look at the log, patchmatch is not installed...
My observation is that installing with Conda and installing with virtualenv do not produce the same result; the dependency versions are different.
When I install manually the Conda way, everything works as expected.
I've tried to do the same (installer 2.2.4) and had the same issue:
./invoke.sh: line 24: 22103 Segmentation fault: 11 .venv/bin/python .venv/bin/invoke.py --web $*
And I can confirm that a manual installation works flawlessly.
Thank you so much for the bug reports. It does sound like this can be solved by using the Conda dependencies as the installer dependencies. Could one of those of you who have the manual install working please do the following:
- Activate your environment (indicate whether it is a virtual environment or Conda)
- Run pip list and save the output to the file 'dependencies-working.txt'
- Now use the invoke.sh command to enter the "developer's console" (which activates the 2.2.4 venv)
- Run pip list again and save the output to the file 'dependencies-broken.txt'
- Upload both files to this thread. I will then propose some fixes to try until we figure out what is causing the crash.
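The steps above can be sketched as follows (the Conda environment name and the venv path are my assumptions, not from the thread; adjust them to your setup):

```shell
# Sketch: capture the package list from each environment, then diff them.
# "invokeai" as the Conda env name and ~/invokeai/.venv are hypothetical.
conda activate invokeai                       # the working (manual) environment
pip list > dependencies-working.txt
source ~/invokeai/.venv/bin/activate          # the installer's (broken) venv
pip list > dependencies-broken.txt
diff dependencies-working.txt dependencies-broken.txt
```

A plain diff of the two files quickly surfaces version mismatches without having to eyeball the full lists.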
I'm busy at work right now; I will do this this evening if no one has been able to do it before then.
git clone from main and pip install -r requirements-mac-mps-cpu.txt worked fine on MacOS 13.0.1 / Python 3.10.8
Here's my working list. dependencies-working.txt
@Vargol can you also test the official 2.2.4 installer?
git clone from main and pip install -r requirements-mac-mps-cpu.txt worked fine on MacOS 13.0.1 / Python 3.10.8
Here's my working list. dependencies-working.txt
@Vargol This is surprising to me, because the installer script does exactly the same thing that you did. I know this is extra work for you, but could you try the 2.2.4 installer, let it run to completion (you can skip the model-downloading part), and then activate the .venv located in the invokeai runtime directory and do a pip list there as well? If the web GUI is crashing with one set of requirements and not with the other, we'll be close to tracking down the problem.
/opt/local/bin/python3.10: No module named pip
I would prefer it if the script only updated pip in the venv anyway. I've moved the pip update after the activation and the setting of $PYTHON.
Then I got errors due to spaces in the path.
/Volumes/Sabrent Media/Documents/Source/Python/iai224/invokeai ~/Downloads/InvokeAI-Installer
./install.sh: ./.venv/bin/configure_invokeai.py: /Volumes/Sabrent: bad interpreter: No such file or directory
Okay, I guess I need to use up more of my 256GB internal drive....
Do you want to generate images using the
1. command-line
2. browser-based UI
3. open the developer console
Please enter 1, 2, or 3: 2
Starting the InvokeAI browser-based UI..
>> patchmatch.patch_match: INFO - Compiling and loading c extensions from "/Users/xxx/iai224/invokeai/.venv/lib/python3.10/site-packages/patchmatch".
>> patchmatch.patch_match: WARNING - patchmatch failed to load or compile.
>> patchmatch.patch_match: WARNING - Refer to https://github.com/invoke-ai/InvokeAI/blob/main/docs/installation/INSTALL_PATCHMATCH.md for installation instructions.
>> Patchmatch not loaded (nonfatal)
* Initializing, be patient...
>> Initialization file /Users/xxx/iai224/invokeai/invokeai.init found. Loading...
>> InvokeAI runtime directory is "/Users/xxx/iai224/invokeai"
>> GFPGAN Initialized
>> CodeFormer Initialized
>> ESRGAN Initialized
>> Using device_type mps
>> Current VRAM usage: 0.00G
>> Scanning Model: stable-diffusion-1.5
>> Model Scanned. OK!!
>> Loading stable-diffusion-1.5 from /Users/xxx/iai224/invokeai/models/ldm/stable-diffusion-v1/v1-5-pruned-emaonly.ckpt
>> Calculating sha256 hash of weights file
>> sha256 = cc6cb27103417325ff94f52b7a5d2dde45a7515b25c255d8e396c90014281516 (2.50s)
| LatentDiffusion: Running in eps-prediction mode
| DiffusionWrapper has 859.52 M params.
| Making attention of type 'vanilla' with 512 in_channels
| Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
| Making attention of type 'vanilla' with 512 in_channels
| Using more accurate float32 precision
| Loading VAE weights from: /Users/xxx/iai224/invokeai/models/ldm/stable-diffusion-v1/vae-ft-mse-840000-ema-pruned.ckpt
>> Model loaded in 70.48s
>> Current embedding manager terms: *
>> Setting Sampler to k_lms
* --web was specified, starting web server...
>> Initialization file /Users/xxx/iai224/invokeai/invokeai.init found. Loading...
>> Started Invoke AI Web Server!
>> Default host address now 127.0.0.1 (localhost). Use --host 0.0.0.0 to bind any address.
>> Point your browser at http://127.0.0.1:9090
>> System config requested
>> Image generation requested: {'prompt': 'a man waving over his shoulder', 'iterations': 1, 'steps': 50, 'cfg_scale': 7.5, 'threshold': 0, 'perlin': 0, 'height': 512, 'width': 512, 'sampler_name': 'k_lms', 'seed': 643416995, 'progress_images': False, 'progress_latents': True, 'save_intermediates': 5, 'generation_mode': 'txt2img', 'init_mask': '...', 'seamless': False, 'hires_fix': False, 'variation_amount': 0}
ESRGAN parameters: False
Facetool parameters: False
{'prompt': 'a man waving over his shoulder', 'iterations': 1, 'steps': 50, 'cfg_scale': 7.5, 'threshold': 0, 'perlin': 0, 'height': 512, 'width': 512, 'sampler_name': 'k_lms', 'seed': 643416995, 'progress_images': False, 'progress_latents': True, 'save_intermediates': 5, 'generation_mode': 'txt2img', 'init_mask': '', 'seamless': False, 'hires_fix': False, 'variation_amount': 0}
/Users/xxx/iai224/invokeai/.venv/lib/python3.10/site-packages/ldm/modules/embedding_manager.py:166: UserWarning: The operator 'aten::nonzero' is not currently supported on the MPS backend and will fall back to run on the CPU. This may have performance implications. (Triggered internally at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/mps/MPSFallback.mm:11.)
placeholder_rows, placeholder_cols = torch.where(
Generating: 0%| | 0/1 [00:00<?, ?it/s]>> Ksampler using model noise schedule (steps >= 30)
>> Sampling with k_lms starting at step 0 of 50 (50 new sampling steps)
100%|█████████████████████████████████████| 50/50 [03:20<00:00, 4.02s/it]
{'prompt': 'a man waving over his shoulder', 'iterations': 1, 'steps': 50, 'cfg_scale': 7.5, 'threshold': 0, 'perlin': 0, 'height': 512, 'width': 512, 'sampler_name': 'k_lms', 'seed': 643416995, 'progress_images': False, 'progress_latents': True, 'save_intermediates': 5, 'generation_mode': 'txt2img', 'init_mask': '', 'seamless': False, 'hires_fix': False, 'variation_amount': 0, 'init_img': ''}
>> Image generated: "/Users/xxx/iai224/invokeai/outputs/000001.1c789f65.643416995.png"
Generating: 100%|██████████████████████████| 1/1 [03:28<00:00, 208.79s/it]
>> Usage stats:
>> 1 image(s) generated in 209.23s
@Vargol, to summarize: after fixing the unrelated problems of whitespace in path names and pip not being installed, you were unable to reproduce the segfault issue when starting the web GUI?
I will fix the pathname and pip issues in the installer. Does this code stanza look good to you? I've used the ensurepip module to install pip, and then used the pip module to upgrade itself. Not sure if the latter is needed.
#--------------------------------------------------------------------------------
echo
echo "** Creating Virtual Environment for InvokeAI **"
$PYTHON -mvenv "$ROOTDIR"/.venv
$PYTHON -mensurepip --upgrade
$PYTHON -mpip install --upgrade pip
_err_exit $? "Python failed to create virtual environment "$ROOTDIR"/.venv. Please see $TROUBLESHOOTING for help."
venv installs pip in the virtual environment on its own; all I did was move the
$PYTHON -mpip install --upgrade pip
line so it was after the activation:
$PYTHON -mvenv $ROOTDIR/.venv
_err_exit $? "Python failed to create virtual environment $ROOTDIR/.venv. Please see $TROUBLESHOOTING for help."
#--------------------------------------------------------------------------------
echo
echo "** Activating Virtual Environment for InvokeAI **"
source $ROOTDIR/.venv/bin/activate
_err_exit $? "Failed to activate virtual environment $ROOTDIR/.venv. Please see $TROUBLESHOOTING for help."
PYTHON=$ROOTDIR/.venv/bin/python
$PYTHON -mpip install --upgrade pip
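To illustrate why the ordering matters, here is a self-contained sketch (the /tmp path is hypothetical, this is not the installer itself): once the venv is activated, python resolves inside the venv, so the pip upgrade touches the venv's pip rather than the system interpreter's.

```shell
# Demo: after activation, "python" resolves inside the venv, so "-m pip"
# operates on the venv's pip rather than the system Python's.
python3 -m venv /tmp/demo-venv
source /tmp/demo-venv/bin/activate
command -v python          # now points inside /tmp/demo-venv/bin
python -m pip --version    # the venv's own pip
deactivate
rm -rf /tmp/demo-venv
```

Before activation, $PYTHON still names the system interpreter, which is why an ensurepip/pip call placed before the activation can fail with "No module named pip" on Pythons that ship without it.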
Ok, easy fix then.
@Vargol so you were unable to reproduce the segfault after running the installer script?
Yep, that's right, the installer created a working InvokeAI here.
So the only difference that should occur, assuming pip fetches the same versions, is that I'm using the Python installed by MacPorts. I don't know if that is the case for the folks getting the failures.
@Vargol Thanks for your help. I've fixed the spaces and pip issues and will update the script on the release site now.
@i3oc9i When you have time, please run pip list on the set of requirements from the failed install so that I can compare it to @Vargol's successful install and see if anything is different. It might indeed come down to where Python was installed from.
.venv/bin/python -mpip list > depends_2.2.4_faulty.txt
I'm going to patch the installer 2.2.4 in the same way @Vargol did, to confirm whether it solves the issue.
When I apply the patch suggested by @Vargol, I still get the segmentation fault when the browser is started ...
.venv/bin/python -mpip list > depends_2.2.4_patched_faulty.txt
depends_2.2.4_patched_faulty.txt
The differences between the patched installer and the original one are minimal.
diff depends_2.2.4_*
88c88
< picklescan 0.0.5
---
> picklescan 0.0.6
90c90
< pip 22.2.2
---
> pip 22.3.1
135c135
< tb-nightly 2.12.0a20221210
---
> tb-nightly 2.12.0a20221212
The patch was to fix the installer; my Python doesn't have pip installed, but it was worth trying, as InvokeAI works without issues for me.
Try activating the venv and then running
.venv/bin/python -X faulthandler .venv/bin/invoke.py --web
or
.venv/bin/python -X faulthandler .venv/bin/invoke.py --web --root <your_root_dir>
and see if that prints out any interesting information.
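For background (my gloss, not from the thread): -X faulthandler enables Python's faulthandler module at startup, so a crash in a C extension dumps the Python-level tracebacks of all threads to stderr instead of just "Segmentation fault: 11". The same effect can be had programmatically:

```python
# Sketch: enabling faulthandler at runtime, equivalent to "python -X faulthandler".
# On a fatal signal (SIGSEGV, SIGABRT, ...) it writes the Python tracebacks
# of all threads to stderr before the process dies.
import faulthandler

faulthandler.enable()
print(faulthandler.is_enabled())  # → True
```

The resulting traceback usually names the extension module that was executing when the signal arrived, which narrows the segfault down considerably.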
That's too bad. We're going to have to look at Python itself. Could you compare the output of python -V between the non-working 2.2.4 installer .venv and the working Conda-based install?
EDIT: I was composing this in a different window so didn't see @Vargol's response. Now that I understand what the comparison was, it would still be helpful to activate the working Conda environment and then do a pip list so that we can compare the requirements in the working and non-working environments.
I have installed manually with the pip method and it works:
git clone https://github.com/invoke-ai/InvokeAI.git ./invokeai_pip
cd invokeai_pip
python3 -mvenv .venv
source .venv/bin/activate
ln -sf ./environments-and-requirements/requirements-mac-mps-cpu.txt requirements.txt
pip install --prefer-binary -r requirements.txt
export PYTORCH_ENABLE_MPS_FALLBACK=1
python scripts/invoke.py --web
EDIT: the difference that I notice here is the --prefer-binary option
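For context (my gloss, not from the thread): --prefer-binary tells pip to prefer a pre-built wheel over a newer source distribution, so C extensions are not compiled locally; a locally built extension is a plausible source of the differing, crashing binaries. A sketch, with requirements.txt standing in for the project's requirements file:

```shell
# Sketch: prefer pre-built wheels over newer sdists (flag available since pip 18.0).
pip install --prefer-binary -r requirements.txt

# The same behavior can be set as a default via pip's PIP_<OPTION> convention:
PIP_PREFER_BINARY=1 pip install -r requirements.txt
```

This would also explain why two installs of "the same" requirements can diverge: with the flag, pip may pick an older wheel where the plain install builds the newest sdist from source.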
@i3oc9i Could you give this new version of the installer a try? It inserts --prefer-binary into the pip call, and also fixes the spaces-in-folder-names and no-pip-installed issues that @Vargol identified.
Thanks! InvokeAI-installer-2.2.4-mac.zip
I allowed a 'resume an interrupted install?' and skipped the model install, thereby I assume only overwriting with the new changes. I still get seg-fault errors after installing this - sorry :-(
Now trying a 'clean' install with the InvokeAI-installer from above..
@lstein I have just tried the new InvokeAI-installer-2.2.4-mac.zip and it fails.
@i3oc9i Thanks for this. The other issue worth exploring is whether the Python on your system is the same as the Python that the installer puts in your .venv. Could you try these two things?
- In the environment in which invokeai is working, give these four commands:
which python
which pip
python -V
pip -V
- In the environment in which invokeai is failing, give the same four commands.
Thanks!
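The four commands above can be captured in one file per environment for easier side-by-side comparison (a sketch; env-report.txt is my placeholder name):

```shell
# Sketch: collect interpreter and pip provenance into a single report file.
{ which python; which pip; python -V; pip -V; } > env-report.txt 2>&1
cat env-report.txt
```

Running this once in the working environment and once in the failing .venv, then diffing the two reports, shows immediately whether the two installs are backed by the same Python build.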