InvokeAI
[bug]: Version 2.3.5.post1 Wrong Xformer version, switches to CPU
Is there an existing issue for this?
- [X] I have searched the existing issues
OS
Windows
GPU
cuda
VRAM
6gb
What version did you experience this issue on?
2.3.5.post1
What happened?
After updating (option 9-1), torch was updated to 2.0.1 but the installed xformers version no longer matches it. The system switches to CPU without asking and without this being set in the init file.
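The silent switch to CPU usually comes from a device-selection guard like the following. This is a hypothetical sketch of the fallback pattern, not InvokeAI's actual code: in real code the flag would be something like `torch.cuda.is_available()`, which returns False when the installed CUDA build is missing or broken, so generation quietly runs on the CPU.

```python
# Hypothetical sketch of a silent device-fallback pattern (not InvokeAI's code).
# In practice, cuda_available would come from torch.cuda.is_available(),
# which can return False after a broken upgrade leaves a CPU-only wheel behind.
def pick_device(cuda_available: bool) -> str:
    """Return the compute device, falling back to CPU without any warning."""
    return "cuda" if cuda_available else "cpu"

print(pick_device(True))   # cuda
print(pick_device(False))  # cpu
```

Because the fallback is silent, the only symptom is generation suddenly becoming much slower.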
Screenshots
Additional context
No response
Contact Details
No response
same issue here.
Full reinstall (delete folder and fresh with installer) fixed it, so it's a problem with the updater script.
@Void2258 is correct, running the installer script over my current installation corrects this xformers mismatch.
I've solved it by reinstalling torch:
pip uninstall torch && pip install torch --index-url https://download.pytorch.org/whl/cu118
Reinstalling torch didn't work for me... going to have to do a full reinstall.
Hi, I have confirmed that the updater does not update optional dependencies when running on a remote zip file. I think we have bumped into a limitation in Pip here. I have updated the release notes and provide the following command-line recipe for those of you who have been left with a broken system:
- Start the launcher script and select option # 8 - Developer's console.
- Give the following command:
pip install invokeai[xformers] --use-pep517 --upgrade
This will bring xformers up to date, update to 2.3.5.post1, and get you up and running again. Apologies for the inconvenience!
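After running the recipe, it is worth confirming what pip actually installed before relaunching. A small stdlib-only check like this works from the developer's console (a sketch; the package names in the loop are just the ones discussed in this thread):

```python
# Query installed distribution versions without importing the packages themselves.
from importlib.metadata import version, PackageNotFoundError

def installed_version(pkg: str):
    """Return the installed version of a distribution, or None if absent."""
    try:
        return version(pkg)
    except PackageNotFoundError:
        return None

# Print what pip actually installed for the packages discussed above.
for pkg in ("torch", "xformers", "invokeai"):
    print(pkg, installed_version(pkg))
```

If xformers prints None or an old version after the upgrade, the optional dependency was skipped again.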
Full reinstall (delete folder and fresh with installer) fixed it, so it's a problem with the updater script.
Folks, you do not have to delete the folder. You can either reinstall on top of it (only the libraries will be updated, no changes to your models or settings), or follow the recipe in the post above.
Also, this is only an issue if you have the old version of xformers installed. The updater works properly on the mandatory dependencies but seems not to recognize optional ones.
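The distinction matters because pip treats the bracketed part of a requirement as optional "extras": upgrading plain `invokeai` resolves only the mandatory dependencies, while `invokeai[xformers]` also pulls in the extra's pins. A hand-rolled sketch of how such a spec splits, for illustration only (real tools parse this with `packaging.requirements` per PEP 508):

```python
def split_extras(spec: str):
    """Split a requirement like 'invokeai[xformers]' into (name, extras)."""
    if "[" not in spec:
        return spec, set()
    name, _, rest = spec.partition("[")
    extras = {e.strip() for e in rest.rstrip("]").split(",") if e.strip()}
    return name, extras

print(split_extras("invokeai"))            # mandatory dependencies only
print(split_extras("invokeai[xformers]"))  # mandatory dependencies + xformers extra
```

An updater that passes only the bare name to pip will therefore never see, let alone upgrade, anything listed under the extra.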
I think we have bumped into a limitation in Pip here.
Is this going to be an issue going forward?
Here is another recovery recipe, posted in discord by KatanaXS:
- Open the developer's console or use the command line to activate the invokeai environment.
- Give the following command:
pip install torch==2.0.0+cu118 torchvision==0.15.1+cu118 torchaudio==2.0.1 --index-url https://download.pytorch/ .org/whl/cu118
Note that this will not update Xformers. To install Xformers run the following additional command after the previous one completes successfully:
pip install xformers==0.0.19
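After pinning these versions, one way to confirm you received the CUDA build rather than a CPU-only wheel is to look for the local version tag (e.g. `+cu118`) in the installed version string; this tag format is PyTorch's wheel-naming convention. A minimal sketch:

```python
def is_cuda_wheel(version_string: str) -> bool:
    """PyTorch CUDA wheels carry a local version tag such as '+cu118'."""
    return "+cu" in version_string

print(is_cuda_wheel("2.0.0+cu118"))  # True: CUDA 11.8 build
print(is_cuda_wheel("2.0.0"))        # False: CPU-only wheel
```

If torch reports a version without the `+cu...` tag, pip pulled the default CPU wheel and the system will fall back to CPU regardless of xformers.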
I think we have bumped into a limitation in Pip here.
Is this going to be an issue going forward?
I will find a way around this. Frankly, the performance of torch 2.0 (on CUDA systems at least) is quite good, and Xformers no longer provides as much of a performance boost as it used to.
Here is another recovery recipe, posted in discord by KatanaXS:
1. Open the developer's console or use the command line to activate the invokeai environment.
2. Give the following command:
pip install torch==2.0.0+cu118 torchvision==0.15.1+cu118 torchaudio==2.0.1 --index-url https://download.pytorch/ .org/whl/cu118
Note that this will not update Xformers. To install Xformers, run the following additional command after the previous one completes successfully:
pip install xformers==0.0.19
Does InvokeAI use torchvision and torchaudio?
It uses torchvision, but not torchaudio. However, it doesn't hurt to install torchaudio, and it might be useful later when we provide support for full-frame surround-sound AI generated movies.
Lincoln
might be useful later when we provide support for full-frame surround-sound AI generated movies.
Literally spat my coffee out (laughing). Thanks for that
Hi, I have confirmed that the updater does not update optional dependencies when running on a remote zip file. I think we have bumped into a limitation in Pip here. I have updated the release notes and provide the following command-line recipe for those of you who have been left with a broken system:
- Start the launcher script and select option # 8 - Developer's console.
- Give the following command:
pip install invokeai[xformers] --use-pep517 --upgrade
This will bring xformers up to date, update to 2.3.5.post1, and get you up and running again. Apologies for the inconvenience!
Hey, I have the issue as well. I used the line you provided:
pip install invokeai[xformers] --use-pep517 --upgrade
and it was indeed needed, but the issue still persists. I tried running the second line you provided:
pip install torch==2.0.0+cu118 torchvision==0.15.1+cu118 torchaudio==2.0.1 --index-url https://download.pytorch/ .org/whl/cu118
but I get this error:
ERROR: Invalid requirement: '.org/whl/cu118'
It wouldn't update torch like that. Just for reference, here is the warning I get when trying to generate with InvokeAI:
InvokeAI\.venv\lib\site-packages\torchvision\transforms\functional.py:1603: UserWarning: The default value of the antialias parameter of all the resizing transforms (Resize(), RandomResizedCrop(), etc.) will change from None to True in v0.17, in order to be consistent across the PIL and Tensor backends. To suppress this warning, directly pass antialias=True (recommended, future default), antialias=None (current default, which means False for Tensors and True for PIL), or antialias=False (only works on Tensors - PIL will still use antialiasing). This also applies if you are using the inference transforms from the models weights: update the call to weights.transforms(antialias=True).
pip install torch==2.0.0+cu118 torchvision==0.15.1+cu118 torchaudio==2.0.1 --index-url https://download.pytorch/ .org/whl/cu118
ERROR: Invalid requirement: '.org/whl/cu118'
It's a typo in the index-url; it should be: https://download.pytorch.org/whl/cu118
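The error message makes sense once you see how the shell tokenizes the broken command: the stray space splits the URL in two, so pip receives `.org/whl/cu118` as a separate argument and tries to parse it as a package requirement. A quick illustration using stdlib shell splitting:

```python
import shlex

# The broken command exactly as it appeared in the recipe above.
broken = "pip install torch --index-url https://download.pytorch/ .org/whl/cu118"
tokens = shlex.split(broken)

# The space splits the URL: ".org/whl/cu118" becomes its own token,
# which pip then rejects with "ERROR: Invalid requirement: '.org/whl/cu118'".
print(tokens[-2:])
```

With the space removed, the URL arrives as a single `--index-url` value and the install succeeds.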