[bug]: Version 2.3.5.post1 Wrong Xformer version, switches to CPU

Open Void2258 opened this issue 2 years ago • 15 comments

Is there an existing issue for this?

  • [X] I have searched the existing issues

OS

Windows

GPU

cuda

VRAM

6 GB

What version did you experience this issue on?

2.3.5.post1

What happened?

After updating (option 9-1), torch was updated to 2.0.1 but the installed xformers version no longer matches it. The system silently switches to CPU, without asking and without CPU being set in the init file.
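
To see whether this is what's happening on your install, one rough check (a sketch, assuming the usual .venv install) is to run a few lines at the developer's console Python prompt. If the GPU is visible but xformers fails to import or reports an unexpected version, this is the mismatch in question:

import torch
print("torch", torch.__version__, "| CUDA build:", torch.version.cuda, "| GPU visible:", torch.cuda.is_available())
try:
    import xformers
    print("xformers", xformers.__version__)
except Exception as exc:
    print("xformers failed to import:", exc)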

Screenshots

(screenshots attached to the original issue; not reproduced here)

Additional context

No response

Contact Details

No response

Void2258 avatar May 19 '23 00:05 Void2258

Same issue here.

SauerCarey avatar May 19 '23 01:05 SauerCarey

A full reinstall (deleting the folder and installing fresh with the installer) fixed it, so it's a problem with the updater script.

Void2258 avatar May 19 '23 03:05 Void2258

@Void2258 is correct; running the installer script over my current installation corrects this xformers mismatch.

Raecaug avatar May 19 '23 07:05 Raecaug

I've solved it by reinstalling torch:

pip uninstall torch && pip install torch --index-url https://download.pytorch.org/whl/cu118

fe-c avatar May 19 '23 08:05 fe-c

Reinstalling torch didn't work for me... going to have to do a full reinstall.

SauerCarey avatar May 19 '23 12:05 SauerCarey

Hi, I have confirmed that the updater does not update optional dependencies when running on a remote zip file. I think we have bumped into a limitation in Pip here. I have updated the release notes and provide the following command-line recipe for those of you who have been left with a broken system:

  1. Start the launcher script and select option # 8 - Developer's console.
  2. Give the following command:
pip install invokeai[xformers] --use-pep517 --upgrade

This will bring xformers up to date, update InvokeAI to 2.3.5.post1, and get you up and running again. Apologies for the inconvenience!
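
To double-check that the upgrade took effect, one possible sketch (assuming a CUDA-capable torch 2.0.x and a matching xformers end up installed) is to exercise the memory-efficient attention op directly from the developer's console:

import torch
from xformers.ops import memory_efficient_attention

if torch.cuda.is_available():
    # tiny fp16 tensors in (batch, seq_len, heads, head_dim) layout
    q = k = v = torch.randn(1, 16, 8, 64, device="cuda", dtype=torch.float16)
    out = memory_efficient_attention(q, k, v)
    print("xformers attention OK, output shape:", tuple(out.shape))
else:
    print("torch still cannot see the GPU -- the mismatch is not fixed yet")

If this raises instead of printing a shape, the torch/xformers pair is still inconsistent.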

lstein avatar May 19 '23 16:05 lstein

A full reinstall (deleting the folder and installing fresh with the installer) fixed it, so it's a problem with the updater script.

Folks, you do not have to delete the folder. You can either reinstall on top of it (only the libraries will be updated, no changes to your models or settings), or follow the recipe in the post above.

Also, this is only an issue if you have the old version of xformers installed. The updater works properly on the mandatory dependencies, but seems not to recognize the optional ones.
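
In pip terms this looks roughly like the following (a general illustration, not the updater's exact invocation):

pip install --upgrade invokeai              # upgrades the package and its required dependencies
pip install --upgrade invokeai[xformers]    # also resolves the optional [xformers] extra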

lstein avatar May 19 '23 16:05 lstein

I think we have bumped into a limitation in Pip here.

Is this going to be an issue going forward?

Void2258 avatar May 19 '23 17:05 Void2258

Here is another recovery recipe, posted on Discord by KatanaXS:

  1. Open the developer's console or use the command line to activate the invokeai environment.
  2. Give the following command:
pip install torch==2.0.0+cu118 torchvision==0.15.1+cu118 torchaudio==2.0.1 --index-url https://download.pytorch/ .org/whl/cu118

Note that this will not update Xformers. To install Xformers run the following additional command after the previous one completes successfully:

pip install xformers==0.0.19
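
Once both commands finish, a quick sanity check (a sketch, assuming the pinned versions above) is to print what actually got installed and confirm that CUDA builds, not CPU-only wheels, were picked up:

from importlib.metadata import PackageNotFoundError, version

# torch and torchvision should report a +cu118 local version tag
for pkg in ("torch", "torchvision", "torchaudio", "xformers"):
    try:
        print(pkg, version(pkg))
    except PackageNotFoundError:
        print(pkg, "is not installed")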

lstein avatar May 19 '23 17:05 lstein

I think we have bumped into a limitation in Pip here.

Is this going to be an issue going forward?

I will find a way around this. Frankly, the performance of torch 2.0 (on CUDA systems at least) is quite good, and xformers no longer provides as much of a performance boost as it used to.
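
For context, the built-in alternative here is torch 2.0's fused scaled-dot-product attention; a minimal standalone sketch (not an InvokeAI code path) looks like this:

import torch
import torch.nn.functional as F

# torch >= 2.0 dispatches this call to fused / memory-efficient kernels where available
q = k = v = torch.randn(1, 8, 16, 64)  # (batch, heads, seq_len, head_dim)
out = F.scaled_dot_product_attention(q, k, v)
print(out.shape)  # torch.Size([1, 8, 16, 64])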

lstein avatar May 19 '23 18:05 lstein

pip install torch==2.0.0+cu118 torchvision==0.15.1+cu118 torchaudio==2.0.1 --index-url https://download.pytorch/ .org/whl/cu118

Does InvokeAI use torchvision and torchaudio?

fe-c avatar May 19 '23 18:05 fe-c

It uses torchvision, but not torchaudio. However, it doesn't hurt to install torchaudio, and it might be useful later when we provide support for full-frame, surround-sound AI-generated movies.

Lincoln

lstein avatar May 19 '23 19:05 lstein

might be useful later when we provide support for full-frame, surround-sound AI-generated movies.

Literally spat my coffee out (laughing). Thanks for that

hipsterusername avatar May 20 '23 14:05 hipsterusername

Hi, I have confirmed that the updater does not update optional dependencies when running on a remote zip file. I think we have bumped into a limitation in Pip here. I have updated the release notes and provide the following command-line recipe for those of you who have been left with a broken system:

  1. Start the launcher script and select option # 8 - Developer's console.
  2. Give the following command:
pip install invokeai[xformers] --use-pep517 --upgrade

This will bring xformers up to date, update InvokeAI to 2.3.5.post1, and get you up and running again. Apologies for the inconvenience!

Hey, I have the issue as well. I used the line you provided:

pip install invokeai[xformers] --use-pep517 --upgrade

and it was indeed needed, but the issue still persists. I tried running the second command you provided:

pip install torch==2.0.0+cu118 torchvision==0.15.1+cu118 torchaudio==2.0.1 --index-url https://download.pytorch/ .org/whl/cu118

but I get this error:

ERROR: Invalid requirement: '.org/whl/cu118'

It wouldn't update torch like that. Just for reference, here is the warning I get before trying to generate with InvokeAI:

InvokeAI\.venv\lib\site-packages\torchvision\transforms\functional.py:1603: UserWarning: The default value of the antialias parameter of all the resizing transforms (Resize(), RandomResizedCrop(), etc.) will change from None to True in v0.17, in order to be consistent across the PIL and Tensor backends. To suppress this warning, directly pass antialias=True (recommended, future default), antialias=None (current default, which means False for Tensors and True for PIL), or antialias=False (only works on Tensors - PIL will still use antialiasing). This also applies if you are using the inference transforms from the models weights: update the call to weights.transforms(antialias=True).

TheSoulsKeeper avatar May 22 '23 07:05 TheSoulsKeeper

pip install torch==2.0.0+cu118 torchvision==0.15.1+cu118 torchaudio==2.0.1 --index-url https://download.pytorch/ .org/whl/cu118

ERROR: Invalid requirement: '.org/whl/cu118'

It's a typo in the index-url; it should be: https://download.pytorch.org/whl/cu118
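
With that fixed, the full command from the recipe above (same pinned versions) becomes:

pip install torch==2.0.0+cu118 torchvision==0.15.1+cu118 torchaudio==2.0.1 --index-url https://download.pytorch.org/whl/cu118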

fe-c avatar May 22 '23 07:05 fe-c