[bug]: macOS 13.1 with M1 chip - install script seems to grab x86_64 version of PyTorch rather than arm64
Is there an existing issue for this?
- [X] I have searched the existing issues
OS
macOS
GPU
mps
VRAM
No response
What happened?
Installing a new InvokeAI in a custom directory. Already installed: Python 3.10.9 & pip 22.3.1.
I ran the install script twice, but both times, after entering '2' at the first prompt, I get the following:
OSError: dlopen(/Users/masonjames/Sites/invokeai/.venv/lib/python3.10/site-packages/torch/lib/libtorch_global_deps.dylib, 0x000A): tried: '/Users/masonjames/Sites/invokeai/.venv/lib/python3.10/site-packages/torch/lib/libtorch_global_deps.dylib' (mach-o file, but is an incompatible architecture (have 'x86_64', need 'arm64')), '/System/Volumes/Preboot/Cryptexes/OS/Users/masonjames/Sites/invokeai/.venv/lib/python3.10/site-packages/torch/lib/libtorch_global_deps.dylib' (no such file), '/Users/masonjames/Sites/invokeai/.venv/lib/python3.10/site-packages/torch/lib/libtorch_global_deps.dylib' (mach-o file, but is an incompatible architecture (have 'x86_64', need 'arm64'))
It seems like PyTorch may be loading incorrect (x86_64) versions of these libraries, but I'm well out of my depth to troubleshoot or resolve this.
If I'm posting this in the wrong place, please point me in the right direction. Thanks so much.
Screenshots
No response
Additional context
No response
Contact Details
No response
I have a similar problem with an M1 Pro, Ventura 13.0.
My error is:
NotImplementedError: The operator 'aten::index.Tensor' is not current implemented for the MPS device. If you want this op to be added in priority during the prototype phase of this feature, please comment on https://github.com/pytorch/pytorch/issues/77764. As a temporary fix, you can set the environment variable PYTORCH_ENABLE_MPS_FALLBACK=1
to use the CPU as a fallback for this op. WARNING: this will be slower than running natively on MPS.
I'm getting the exact same error on my M1 Pro. How do we make this a priority?
@jlcases this is something that'll actually happen to everyone using PyTorch on MPS - doing what that note says should 'fix' it. InvokeAI's invoke.sh
does this by default, but if you're launching another way you'll want to set that variable yourself. Either way, it's unrelated to this issue.
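For example, if you start InvokeAI directly rather than through invoke.sh, something along these lines should work (the scripts/invoke.py path is just illustrative - use whatever entry point you normally launch):
export PYTORCH_ENABLE_MPS_FALLBACK=1
python scripts/invoke.py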
To everyone else: I seem to recall that in the early days of Homebrew on Apple Silicon, there were instructions to set up/add the x86_64 architecture so you could use Homebrew packages that hadn't been compiled for the new architecture yet. I'm wondering if that is the origin of the issue, as I've tested InvokeAI with a 'normal' arm64/Apple Silicon Homebrew install and things worked fine.
I have no problems on M1. But I know from the past that people had messed-up tmux sessions as well as env vars (e.g. ARCHFLAGS), ran a terminal under Rosetta, had x86_64 Python installed instead of arm64, and so on. Or maybe you have an old x86_64 Homebrew and installed bash from there, in which case your bash shell would run as x86_64.
Yes, this can have many causes and be a pain in the ... to track down, but I can promise you that the problem is not InvokeAI.
Try file $(which python3)
or python3 -c "import platform; print(platform.platform())"
to find out whether Python is built for arm64 and is actually running as arm64.
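On a native arm64 setup, those should report something like the following (the Python path and exact version strings are just examples and will differ on your machine):
/opt/homebrew/bin/python3: Mach-O 64-bit executable arm64
macOS-13.1-arm64-arm-64bit
If either output mentions x86_64 instead, your Python is the Intel build (or is running under Rosetta), so pip will keep selecting x86_64 wheels. You can also check whether the shell itself is being translated with
sysctl -n sysctl.proc_translated
which prints 1 under Rosetta and 0 when running natively on Apple Silicon.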
And just to be 100% sure, I grabbed the latest installer and got this:
...
Collecting torch<1.13.0
Downloading torch-1.12.1-cp310-none-macosx_11_0_arm64.whl (49.1 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 49.1/49.1 MB 3.2 MB/s eta 0:00:00
Collecting torchvision<0.14.0
Downloading torchvision-0.13.1-cp310-cp310-macosx_11_0_arm64.whl (1.2 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.2/1.2 MB 6.9 MB/s eta 0:00:00
...
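If you want to confirm which wheels pip will select without installing anything, pip can print its compatible platform tags (a general pip feature, nothing InvokeAI-specific):
python3 -m pip debug --verbose | grep macosx
On an arm64 Python you'd expect tags ending in macosx_11_0_arm64 (or similar) rather than x86_64.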
Regarding the 'aten::index.Tensor' NotImplementedError quoted above: set
export PYTORCH_ENABLE_MPS_FALLBACK=1
before you run the script (though invoke.sh normally has this set already 🤔).
This is a thoughtful response. I will keep sleuthing... I think what happened is that an old .bash_profile came over from my old MacBook to this new one. Even though my installations are "fresh", this old file exists with a bunch of unused aliases, etc. Thanks again for pointing me in the right direction.
can this be closed?
There has been no activity in this issue for 14 days. If this issue is still being experienced, please reply with an updated confirmation that the issue is still being experienced with the latest release.
I ran into the same problem. I also read some posts like this one, but they didn't work for me. My solution was to uninstall the broken packages and then install them again. You may need to repeat this a few times.
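For reference, a minimal reinstall from a native arm64 terminal might look like this (run it inside InvokeAI's .venv; clearing the pip cache is optional but rules out re-using a stale download):
pip uninstall -y torch torchvision
pip cache purge
pip install torch torchvision
Afterwards you can verify the result with
python -c "import torch, platform; print(torch.__version__, platform.machine())"
which should print the torch version alongside arm64.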