
Getting "AssertionError: Torch not compiled with CUDA enabled"

Open shecky2000 opened this issue 1 year ago • 8 comments

Then the program ends.

Ideas?

shecky2000 · Jun 10 '23

Is NVIDIA CUDA installed on your system? I'm working through this at the moment and ran into the same issue, only to discover that the CUDA toolkit needed to be installed.
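A quick terminal check that distinguishes the two parts: nvidia-smi needs only the NVIDIA driver, while nvcc is only there once the CUDA toolkit itself is installed.

nvidia-smi       # needs only the driver; the header also shows the highest CUDA version the driver supports
nvcc --version   # present only once the CUDA toolkit is installed; reports the toolkit version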

StewartAragon · Jun 10 '23

Check if you have the NVIDIA GPU Computing Toolkit in your Program Files, and check the version in the folder. You'll need to reinstall the torch version compatible with it. For example, if you have v11.7, you can install it with:

pip install torch==2.0.0+cu117 -f https://download.pytorch.org/whl/torch_stable.html

Otherwise you have to manually install the CUDA toolkit. I had the same problem just now, and I've just finished ingesting my files. Fingers crossed that this thing works. I had some issues with privateGPT, so this one's my last attempt at creating a local solution.
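A quick way to see which CUDA version, if any, the currently installed torch wheel was built against before picking a replacement:

python -c "import torch; print(torch.__version__, torch.version.cuda)"
# prints e.g. "2.0.0+cu117 11.7"; None for the second value means a CPU-only build,
# which is exactly what raises "Torch not compiled with CUDA enabled"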

Luisjesch · Jun 10 '23

I can confirm my versions of CUDA (12.1) and the attempted Torch install (2.0.1) are compatible.

It's a 10-day-old computer with an NVIDIA GeForce RTX 3060 with 12GB.

To make sure I wasn't doing something out-of-scope that was causing problems, I wiped the computer and did a full Windows 11 factory reinstall. Then did the following:

  1. Installed CUDA Tools
  2. Installed Anaconda
  3. Installed Python
  4. Installed VS Code with C++
  5. Downloaded the repo and unzipped it into a folder
  6. Installed requirements (no errors or red text)
  7. Ran python ingest.py

During the download-->install sequence, I encountered the same error. Tests:

  1. Ran torch.cuda.is_available() in Python, which returned False.
  2. Ran torch.cuda.get_arch_list() in Python, which returned [].
  3. Checked the NVIDIA Control Panel for the driver version (531.14).
  4. Checked the NVIDIA site to confirm this driver is compatible with CUDA 12.1.x (it is).

Ideas & thank you for reading.
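For what it's worth, False together with an empty arch list is what a CPU-only torch wheel reports even when the driver and toolkit are fine, so one more test narrows it down (assuming that is the cause here):

python -c "import torch; print(torch.version.cuda); torch.zeros(1).to('cuda')"
# on a CPU-only wheel this prints None and then raises "AssertionError: Torch not compiled with CUDA enabled";
# on a CUDA-enabled wheel with a working driver it prints the build's CUDA version and exits quietly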

shecky2000 · Jun 10 '23

@Luisjesch

I can't thank you enough. This seems to have worked!

shecky2000 · Jun 10 '23

@shecky2000 Hey, can you share the command that worked for you, please?

I'm facing the same issue on my 4090 with 64GB RAM. I'm trying to run localGPT from a nix config with a venv in WSL2 Ubuntu. I installed the CUDA toolkit (version 12.1) in Ubuntu, and torch is 2.0.1.

When I try pip install torch==2.0.1+cu121 -f https://download.pytorch.org/whl/torch_stable.html, I get a "no version found" error:

ERROR: Could not find a version that satisfies the requirement torch==2.0.1+cu121 (from versions: 1.11.0, 1.11.0+cpu, 1.11.0+cu102, 1.11.0+cu113, 1.11.0+cu115, 1.11.0+rocm4.3.1, 1.11.0+rocm4.5.2, 1.12.0, 1.12.0+cpu, 1.12.0+cu102, 1.12.0+cu113, 1.12.0+cu116, 1.12.0+rocm5.0, 1.12.0+rocm5.1.1, 1.12.1, 1.12.1+cpu, 1.12.1+cu102, 1.12.1+cu113, 1.12.1+cu116, 1.12.1+rocm5.0, 1.12.1+rocm5.1.1, 1.13.0, 1.13.0+cpu, 1.13.0+cu116, 1.13.0+cu117, 1.13.0+cu117.with.pypi.cudnn, 1.13.0+rocm5.1.1, 1.13.0+rocm5.2, 1.13.1, 1.13.1+cpu, 1.13.1+cu116, 1.13.1+cu117, 1.13.1+cu117.with.pypi.cudnn, 1.13.1+rocm5.1.1, 1.13.1+rocm5.2, 2.0.0, 2.0.0+cpu, 2.0.0+cpu.cxx11.abi, 2.0.0+cu117, 2.0.0+cu117.with.pypi.cudnn, 2.0.0+cu118, 2.0.0+rocm5.3, 2.0.0+rocm5.4.2, 2.0.1, 2.0.1+cpu, 2.0.1+cpu.cxx11.abi, 2.0.1+cu117, 2.0.1+cu117.with.pypi.cudnn, 2.0.1+cu118, 2.0.1+rocm5.3, 2.0.1+rocm5.4.2)
ERROR: No matching distribution found for torch==2.0.1+cu121

I tried a different version, pip install torch==2.0.1+cu118 -f https://download.pytorch.org/whl/torch_stable.html, but this is not solving the issue.

Any help is appreciated.
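For what it's worth, the listing in the error above shows there is no +cu121 build of torch 2.0.1; +cu118 is the newest CUDA build published for it, and NVIDIA drivers are backward compatible, so the cu118 wheel is normally the right one on a CUDA 12.1 system. After installing it (the index URL below is the usual alternative to the -f flag), it's worth confirming which wheel actually landed in the venv:

pip install torch==2.0.1 --index-url https://download.pytorch.org/whl/cu118
python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"
# expected output is along the lines of "2.0.1+cu118 11.8 True" in WSL2 with a recent Windows driver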

Shouri14 · Jun 13 '23

The CUDA/Torch thing is such an issue that I ended up wiping my Windows 11 computer and installing a pre-built stack from Lambda Labs. Steps:

  1. Install Ubuntu
  2. Use the command at this website https://lambdalabs.com/lambda-stack-deep-learning-software
  3. When you reboot, do the MOK (Machine Owner Key) enrollment when it starts back up (a menu will appear)
  4. Do the following test in Python:

import torch
torch.cuda.is_available()

If you get True, you've succeeded.
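A slightly fuller version of that test also shows which CUDA build and which GPU torch ended up with (the last call assumes is_available() returned True):

python -c "import torch; print(torch.cuda.is_available(), torch.version.cuda, torch.cuda.get_device_name(0))"
# prints True, the CUDA version the wheel was built with, and the GPU name (e.g. the RTX 3060 mentioned above)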

shecky2000 · Jun 13 '23

So is localGPT not installable on a Mac? Nvidia does not support CUDA on Mac.

heaversm · Jun 13 '23

@heaversm --device_type cpu?
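That would look something like this, assuming the repo's run_localGPT.py entry point and that ingest.py takes the same flag; it all runs on the CPU, just much more slowly than on a GPU:

python ingest.py --device_type cpu        # build the index without touching CUDA
python run_localGPT.py --device_type cpu  # then query on the CPU as well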

endolith · Jun 16 '23

"The detected CUDA version (12.1) mismatches the version that was used to compile PyTorch (11.7). Please make sure to use the same CUDA versions."

Are there options available without reinstalling CUDA to an older version?

rishithellathmeethal · Jul 26 '23

"The detected CUDA version (12.1) mismatches the version that was used to compile PyTorch (11.7). Please make sure to use the same CUDA versions."

Is there options availble without reinstalling the CUDA to an older version ?

I had success yesterday with 12.2 using pytorch 2.0.1+cu117 on a dual-boot Ubuntu and Windows system.

I recommend doing fresh installs with both of these

ssimpson91 · Jul 26 '23

So is localGPT not installable on a Mac? Nvidia does not support CUDA on Mac.

Look into running on mps instead of cpu.
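On Apple Silicon the same flag should take mps; it's worth first confirming that the installed torch build can actually see the Metal backend (torch.backends.mps is there in torch 1.12+):

python -c "import torch; print(torch.backends.mps.is_available())"   # True means torch can use the Apple GPU
python run_localGPT.py --device_type mps                             # assuming the script accepts mps, as it does cpu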

ssimpson91 · Jul 26 '23