Supported torch versions?
I am getting ModuleNotFoundError: No module named 'torch' when installing the project on Python 3.11.3 with Torch 2.6.0. Made sure I wasn't crazy and checked pip.
Full console log:
(starvector) was@SKYNET-NODE:~/projects/python/scripts/star-vector$ pip install --upgrade pip
pip install -e .
Requirement already satisfied: pip in /home/was/.pyenv/versions/3.11.3/envs/starvector/lib/python3.11/site-packages (25.0.1)
Obtaining file:///home/was/projects/python/scripts/star-vector
Installing build dependencies ... done
Checking if build backend supports build_editable ... done
Getting requirements to build editable ... done
Preparing editable metadata (pyproject.toml) ... done
Collecting torch==2.5.1 (from starvector==1.0)
Using cached torch-2.5.1-cp311-cp311-manylinux1_x86_64.whl.metadata (28 kB)
Collecting torchvision==0.20.1 (from starvector==1.0)
Using cached torchvision-0.20.1-cp311-cp311-manylinux1_x86_64.whl.metadata (6.1 kB)
Collecting transformers==4.49.0 (from starvector==1.0)
Using cached transformers-4.49.0-py3-none-any.whl.metadata (44 kB)
Collecting tokenizers==0.21.1 (from starvector==1.0)
Using cached tokenizers-0.21.1-cp39-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (6.8 kB)
Collecting sentencepiece==0.2.0 (from starvector==1.0)
Using cached sentencepiece-0.2.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (7.7 kB)
Collecting accelerate (from starvector==1.0)
Using cached accelerate-1.5.2-py3-none-any.whl.metadata (19 kB)
Collecting pydantic==2.10 (from starvector==1.0)
Using cached pydantic-2.10.0-py3-none-any.whl.metadata (167 kB)
Collecting markdown2[all] (from starvector==1.0)
Using cached markdown2-2.5.3-py3-none-any.whl.metadata (2.1 kB)
Collecting numpy<2.0.0 (from starvector==1.0)
Using cached numpy-1.26.4-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (61 kB)
Collecting scikit-learn==1.2.2 (from starvector==1.0)
Using cached scikit_learn-1.2.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (11 kB)
Collecting gradio==3.36.1 (from starvector==1.0)
Using cached gradio-3.36.1-py3-none-any.whl.metadata (15 kB)
Collecting gradio_client==0.2.9 (from starvector==1.0)
Using cached gradio_client-0.2.9-py3-none-any.whl.metadata (7.0 kB)
Collecting requests (from starvector==1.0)
Using cached requests-2.32.3-py3-none-any.whl.metadata (4.6 kB)
Collecting httpx==0.24.0 (from starvector==1.0)
Using cached httpx-0.24.0-py3-none-any.whl.metadata (8.1 kB)
Collecting uvicorn (from starvector==1.0)
Using cached uvicorn-0.34.0-py3-none-any.whl.metadata (6.5 kB)
Collecting fastapi (from starvector==1.0)
Using cached fastapi-0.115.11-py3-none-any.whl.metadata (27 kB)
Collecting svgpathtools==1.6.1 (from starvector==1.0)
Using cached svgpathtools-1.6.1-py2.py3-none-any.whl.metadata (22 kB)
Collecting seaborn==0.12.2 (from starvector==1.0)
Using cached seaborn-0.12.2-py3-none-any.whl.metadata (5.4 kB)
Collecting taming-transformers (from starvector==1.0)
Using cached taming_transformers-0.0.1-py3-none-any.whl.metadata (499 bytes)
Collecting lpips (from starvector==1.0)
Using cached lpips-0.1.4-py3-none-any.whl.metadata (10 kB)
Collecting cairosvg (from starvector==1.0)
Using cached CairoSVG-2.7.1-py3-none-any.whl.metadata (2.7 kB)
Collecting beautifulsoup4 (from starvector==1.0)
Using cached beautifulsoup4-4.13.3-py3-none-any.whl.metadata (3.8 kB)
Collecting webcolors (from starvector==1.0)
Using cached webcolors-24.11.1-py3-none-any.whl.metadata (2.2 kB)
Collecting tqdm (from starvector==1.0)
Using cached tqdm-4.67.1-py3-none-any.whl.metadata (57 kB)
Collecting omegaconf (from starvector==1.0)
Using cached omegaconf-2.3.0-py3-none-any.whl.metadata (3.9 kB)
Collecting open-clip-torch (from starvector==1.0)
Using cached open_clip_torch-2.31.0-py3-none-any.whl.metadata (31 kB)
Collecting noise (from starvector==1.0)
Using cached noise-1.2.2.zip (132 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Collecting datasets (from starvector==1.0)
Using cached datasets-3.4.1-py3-none-any.whl.metadata (19 kB)
Collecting scikit-image (from starvector==1.0)
Using cached scikit_image-0.25.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (14 kB)
Collecting fairscale (from starvector==1.0)
Using cached fairscale-0.4.13.tar.gz (266 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Installing backend dependencies ... done
Preparing metadata (pyproject.toml) ... done
Collecting lxml (from starvector==1.0)
Using cached lxml-5.3.1-cp311-cp311-manylinux_2_28_x86_64.whl.metadata (3.7 kB)
Collecting torch-fidelity (from starvector==1.0)
Using cached torch_fidelity-0.3.0-py3-none-any.whl.metadata (2.0 kB)
Collecting clip-openai (from starvector==1.0)
Using cached clip_openai-1.0.post20230121-py3-none-any.whl.metadata (544 bytes)
Collecting scipy==1.11.1 (from starvector==1.0)
Using cached scipy-1.11.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (59 kB)
Collecting sentence-transformers (from starvector==1.0)
Using cached sentence_transformers-3.4.1-py3-none-any.whl.metadata (10 kB)
Collecting reportlab (from starvector==1.0)
Using cached reportlab-4.3.1-py3-none-any.whl.metadata (1.7 kB)
Collecting svglib (from starvector==1.0)
Using cached svglib-1.5.1.tar.gz (913 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Requirement already satisfied: Pillow in /home/was/.pyenv/versions/3.11.3/envs/starvector/lib/python3.11/site-packages (from starvector==1.0) (11.0.0)
Collecting protobuf (from starvector==1.0)
Using cached protobuf-6.30.1-cp39-abi3-manylinux2014_x86_64.whl.metadata (593 bytes)
Collecting openai (from starvector==1.0)
Using cached openai-1.68.0-py3-none-any.whl.metadata (25 kB)
Collecting flash_attn==2.7.3 (from starvector==1.0)
Using cached flash_attn-2.7.3.tar.gz (3.2 MB)
Installing build dependencies ... done
Getting requirements to build wheel ... error
error: subprocess-exited-with-error
× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> [20 lines of output]
Traceback (most recent call last):
File "/home/was/.pyenv/versions/3.11.3/envs/starvector/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 389, in <module>
main()
File "/home/was/.pyenv/versions/3.11.3/envs/starvector/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 373, in main
json_out["return_val"] = hook(**hook_input["kwargs"])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/was/.pyenv/versions/3.11.3/envs/starvector/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 143, in get_requires_for_build_wheel
return hook(config_settings)
^^^^^^^^^^^^^^^^^^^^^
File "/tmp/pip-build-env-1l0ayt_0/overlay/lib/python3.11/site-packages/setuptools/build_meta.py", line 334, in get_requires_for_build_wheel
return self._get_build_requires(config_settings, requirements=[])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/tmp/pip-build-env-1l0ayt_0/overlay/lib/python3.11/site-packages/setuptools/build_meta.py", line 304, in _get_build_requires
self.run_setup()
File "/tmp/pip-build-env-1l0ayt_0/overlay/lib/python3.11/site-packages/setuptools/build_meta.py", line 522, in run_setup
super().run_setup(setup_script=setup_script)
File "/tmp/pip-build-env-1l0ayt_0/overlay/lib/python3.11/site-packages/setuptools/build_meta.py", line 320, in run_setup
exec(code, locals())
File "<string>", line 22, in <module>
ModuleNotFoundError: No module named 'torch'
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
(starvector) was@SKYNET-NODE:~/projects/python/scripts/star-vector$ which torch
(starvector) was@SKYNET-NODE:~/projects/python/scripts/star-vector$ pip show torch
Name: torch
Version: 2.6.0+cu124
Summary: Tensors and Dynamic neural networks in Python with strong GPU acceleration
Home-page: https://pytorch.org/
Author: PyTorch Team
Author-email: [email protected]
License: BSD-3-Clause
Location: /home/was/.pyenv/versions/3.11.3/envs/starvector/lib/python3.11/site-packages
Requires: filelock, fsspec, jinja2, networkx, nvidia-cublas-cu12, nvidia-cuda-cupti-cu12, nvidia-cuda-nvrtc-cu12, nvidia-cuda-runtime-cu12, nvidia-cudnn-cu12, nvidia-cufft-cu12, nvidia-curand-cu12, nvidia-cusolver-cu12, nvidia-cusparse-cu12, nvidia-cusparselt-cu12, nvidia-nccl-cu12, nvidia-nvjitlink-cu12, nvidia-nvtx-cu12, sympy, triton, typing-extensions
Required-by: torchaudio, torchvision
I got past this Torch issue but then encountered tons of other errors and gave up. There are a bunch of system packages required that you don't mention in your install directions. I got a number of them and the conda packages installed, but still have issues, now with compiling. Is this really supposed to work on 3.11.3?
It seems to me like an error with flash-attention. Can you try installing flash-attn separately? I'm checking this on my side and will update the installation steps.
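For anyone hitting this: the root cause is pip's build isolation. flash-attn's setup.py does `import torch` at build time, but pip builds the wheel in a fresh isolated environment that does not contain your venv's torch, even though `pip show torch` finds it. A rough sketch of the diagnosis and workaround (versions here follow the project's pins; adjust to your setup):

```shell
# Does the *current* environment have torch? flash-attn needs it at build time,
# and pip's isolated build env will not inherit it from the venv.
if python3 -c "import torch" 2>/dev/null; then
    echo "torch importable - install flash-attn without build isolation:"
    echo "  pip install flash-attn==2.7.3 --no-build-isolation"
else
    echo "torch NOT importable - pip install torch first"
fi
```

With `--no-build-isolation`, pip skips the throwaway build environment and compiles against the torch you already have installed.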
Yeah, I was missing a few system packages and some conda-specific packages before I could start installing this.
System:
- nvcc - I did not have this installed correctly, so the build couldn't find it at the default path of /usr/local/cuda/bin/nvcc, because my CUDA directory had a version hyphen in its name.
- Because my Ubuntu had GCC 13, I had to install gcc-12 and g++-12 and export them as CC and CXX:
export CC=/usr/bin/gcc-12
export CXX=/usr/bin/g++-12
Conda:
conda install -c conda-forge gfortran openblas lapack
Star Vector is now installed almost perfectly minus some torch conflicts: torchaudio 2.6.0 requires torch==2.6.0, but you have torch 2.5.1 which is incompatible. But that's probably to be expected.
Update:
Got around to starting testing, and there is a missing dependency in the install: deepspeed.
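The two leftovers mentioned above can be cleaned up in one go. A sketch, assuming the usual torch release pairings (torch 2.5.1 goes with torchvision 0.20.1 and torchaudio 2.5.1):

```shell
# align the trio on the project's pinned torch==2.5.1 to clear the
# "torchaudio 2.6.0 requires torch==2.6.0" conflict
pip install torch==2.5.1 torchvision==0.20.1 torchaudio==2.5.1
# dependency missing from the project's install requirements
pip install deepspeed
```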
How did you get past that issue? I'm having the same torch issue here.
Instead of the specific CUDA-versioned torch packages, I just installed via pip3 install torch torchvision torchaudio, which gives the CUDA 12.4 builds on Linux.
I am working on a Docker image right now; I will create a PR as soon as I have it.
As promised, the PR is up and I’ve also published my pre-built Docker image for easier on-boarding and experimentation.
flash_attn has no Windows whl on PyPI, but there is one on the release page:
https://github.com/Dao-AILab/flash-attention/releases/download/v2.7.3/flash_attn-2.7.3+cu11torch2.5cxx11abiFALSE-cp311-cp311-win_amd64.whl
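If that prebuilt wheel matches your setup (Windows, Python 3.11, torch 2.5, CUDA 11), it can be installed directly instead of compiling from source:

```shell
# install the prebuilt Windows wheel linked above; the filename encodes the
# Python/torch/CUDA versions it was built for, so they must match your env
pip install "https://github.com/Dao-AILab/flash-attention/releases/download/v2.7.3/flash_attn-2.7.3+cu11torch2.5cxx11abiFALSE-cp311-cp311-win_amd64.whl"
```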
This works for me: build flash-attention in the current venv.
git clone https://github.com/Dao-AILab/flash-attention.git
cd flash-attention
pip install . --no-build-isolation
It's working for me with the whl now. My Ubuntu just didn't come with the right build tools - it was ahead by one version.
Hello,
I just cloned the repository and followed the instructions in docker/README.md. The build process went smoothly, with no issues.
docker build -t starvector:latest .
[+] Building 197.8s (9/9) FINISHED docker:default
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 963B 0.0s
=> [internal] load metadata for docker.io/pytorch/pytorch:2.5.1-cuda12.4-cudnn9-devel 1.2s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [1/5] FROM docker.io/pytorch/pytorch:2.5.1-cuda12.4-cudnn9-devel@sha256:14611869895df612b7b07227d5925f30ec3cd6673bad58ce3d84ed107950e014 102.9s
=> => resolve docker.io/pytorch/pytorch:2.5.1-cuda12.4-cudnn9-devel@sha256:14611869895df612b7b07227d5925f30ec3cd6673bad58ce3d84ed107950e014 0.0s
=> => sha256:14611869895df612b7b07227d5925f30ec3cd6673bad58ce3d84ed107950e014 3.47kB / 3.47kB 0.0s
=> => sha256:7021d1b70935851c95c45ed18156980b5024eda29b99564429025ea04f5ec109 30.44MB / 30.44MB 1.0s
=> => sha256:0a7674e3e8fe69dcd7f1424fa29aa033b32c42269aab46cbe9818f8dd7154754 57.59MB / 57.59MB 1.1s
=> => sha256:05d1b981bb5be4f84e3d0608f8f391117805becc0c1a9062697a551fe0ef0fbe 20.42kB / 20.42kB 0.0s
=> => sha256:0d6448aff88945ea46a37cfe4330bdb0ada228268b80da6258a0fec63086f404 4.62MB / 4.62MB 0.7s
=> => sha256:b71b637b97c5efb435b9965058ad414f07afa99d320cf05e89f10441ec1becf4 185B / 185B 0.9s
=> => sha256:56dc8550293751a1604e97ac949cfae82ba20cb2a28e034737bafd7382559609 6.89kB / 6.89kB 1.0s
=> => extracting sha256:7021d1b70935851c95c45ed18156980b5024eda29b99564429025ea04f5ec109 0.5s
=> => sha256:ec6d5f6c9ed94d2ee2eeaf048d90242af638325f57696909f1737b3158d838cf 1.37GB / 1.37GB 36.3s
=> => sha256:47b8539d532f561cac6d7fb8ee2f46c902b66e4a60b103d19701829742a0d11e 64.05kB / 64.05kB 1.2s
=> => sha256:83525caeeb359731f869f1ee87a32acdfdd5efb8af4cab06d8f4fdcf1f317daa 1.52kB / 1.52kB 1.3s
=> => sha256:fd9cc1ad8dee47ca559003714d462f4eb79cb6315a2708927c240b84d022b55f 1.68kB / 1.68kB 1.3s
=> => sha256:8e79813a7b9d5784bb880ca2909887465549de5183411b24f6de72fab0802bcd 2.65GB / 2.65GB 59.3s
=> => sha256:312a542960e3345001fc709156a5139ff8a1d8cc21a51a50f83e87ec2982f579 88.86kB / 88.86kB 1.4s
=> => sha256:0acb777129a5e617c9f5e5d30f1b150f0e256c2acb33e957e4cdd94aa2442433 1.51MB / 1.51MB 1.6s
=> => extracting sha256:0d6448aff88945ea46a37cfe4330bdb0ada228268b80da6258a0fec63086f404 0.1s
=> => extracting sha256:0a7674e3e8fe69dcd7f1424fa29aa033b32c42269aab46cbe9818f8dd7154754 0.5s
=> => sha256:e725174e3835274f0ea48614a92b13f5fffdb013e06b70ed281cee26de7d3906 3.30GB / 3.30GB 66.8s
=> => extracting sha256:b71b637b97c5efb435b9965058ad414f07afa99d320cf05e89f10441ec1becf4 0.0s
=> => extracting sha256:56dc8550293751a1604e97ac949cfae82ba20cb2a28e034737bafd7382559609 0.0s
=> => extracting sha256:ec6d5f6c9ed94d2ee2eeaf048d90242af638325f57696909f1737b3158d838cf 7.8s
=> => sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1 32B / 32B 36.7s
=> => sha256:3093b7e1cc2f912db0ec863fa18f5d2028788b117e23eb2fe00eca1643e80866 99B / 99B 36.8s
=> => extracting sha256:47b8539d532f561cac6d7fb8ee2f46c902b66e4a60b103d19701829742a0d11e 0.0s
=> => extracting sha256:fd9cc1ad8dee47ca559003714d462f4eb79cb6315a2708927c240b84d022b55f 0.0s
=> => extracting sha256:83525caeeb359731f869f1ee87a32acdfdd5efb8af4cab06d8f4fdcf1f317daa 0.0s
=> => extracting sha256:8e79813a7b9d5784bb880ca2909887465549de5183411b24f6de72fab0802bcd 17.3s
=> => extracting sha256:312a542960e3345001fc709156a5139ff8a1d8cc21a51a50f83e87ec2982f579 0.0s
=> => extracting sha256:0acb777129a5e617c9f5e5d30f1b150f0e256c2acb33e957e4cdd94aa2442433 0.0s
=> => extracting sha256:e725174e3835274f0ea48614a92b13f5fffdb013e06b70ed281cee26de7d3906 25.7s
=> => extracting sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1 0.0s
=> => extracting sha256:3093b7e1cc2f912db0ec863fa18f5d2028788b117e23eb2fe00eca1643e80866 0.0s
=> [2/5] RUN apt-get update && apt-get install -y --no-install-recommends wget git vim build-essential libcairo2 cuda-compiler-12-4 libaio-dev && rm -rf /var/lib/apt/lists/* 9.2s
=> [3/5] RUN pip install --upgrade pip && pip install jupyter deepspeed && git clone https://github.com/joanrod/star-vector.git /tmp/star-vector && pip install /tmp/star-vector && rm -rf /tmp/star-vector 74.2s
=> [4/5] RUN apt-get purge -y --auto-remove git build-essential && apt-get clean && rm -rf /var/lib/apt/lists/* 1.3s
=> [5/5] WORKDIR /workspace 0.0s
=> exporting to image 8.9s
=> => exporting layers 8.9s
=> => writing image sha256:91b7f43012d2575ebce9f1daa02e746961f069f622600e50e3234e2a9812dda9 0.0s
=> => naming to docker.io/library/starvector:latest
However, I encountered an error when attempting to run the script quickstart.py:
FileNotFoundError: [Errno 2] No such file or directory: '/usr/local/cuda/bin/nvcc'
Indeed, the nvcc file is nowhere to be found in the system:
root@e4dc8c164440:/# find ./ -name nvcc
root@e4dc8c164440:/#
Here is the directory listing for /usr/local/cuda/bin/:
root@e4dc8c164440:/# ls -l /usr/local/cuda/bin/
total 73540
-rwxr-xr-x 1 root root 112 Mar 16 2024 compute-sanitizer
-rwxr-xr-x 1 root root 17099536 Mar 15 2024 cuda-gdb
-rwxr-xr-x 1 root root 821456 Mar 15 2024 cuda-gdbserver
-rwxr-xr-x 1 root root 658944 Mar 15 2024 cuobjdump
-rwxr-xr-x 1 root root 3826 Mar 28 2024 ncu
-rwxr-xr-x 1 root root 3616 Mar 28 2024 ncu-ui
-rwxr-xr-x 1 root root 50679016 Mar 15 2024 nvdisasm
-rwxr-xr-x 1 root root 6026560 Mar 16 2024 nvprof
root@e4dc8c164440:/#
What did I miss? :)
Thanks.
EDIT:
I successfully installed nvcc manually using apt-get (following an apt update), which suggests that there might be something missing in the docker script.
After that, "quickstart.py" ran smoothly, but the outputted SVG was blank.
After spending a few hours, I found myself stuck. I'll give it another shot in a few days... or weeks.
CUDA is usually not bundled with installs because it may require specific versions on different distros and OSes. It's usually a prerequisite to have in place before installing the package.
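A quick way to check that prerequisite before attempting the install (the path shown is the common Linux default; yours may differ, e.g. a versioned /usr/local/cuda-12.4):

```shell
# Is the CUDA compiler on PATH, and where will builds look for the toolkit?
if command -v nvcc >/dev/null 2>&1; then
    nvcc --version | tail -n 1
else
    echo "nvcc not found - install the CUDA toolkit first"
fi
echo "CUDA root assumed at: ${CUDA_HOME:-/usr/local/cuda}"
```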
@tuxfamily did you ever end up resolving this issue?
@dsingal0 No, I gave up and haven't tried again since, sorry. But let me know if you find a workaround 😉
@tuxfamily I think I got it working. The problem, I believe, was the inference code itself. It works, but needs a lot of postprocessing. Here's what worked for me: https://gist.github.com/dsingal0/16e8f093b8b0343ccf19555afdc70202
Remove your HF token buddy. I don't trust the net, especially now with bot scrapers looking for stuff like this 24/7
@WASasquatch thanks