uv
Installation of torch from pytorch CPU index fails with 'no wheels are available with a matching Python ABI'
- A minimal code snippet that reproduces the bug.
uv pip install --verbose --index-url https://download.pytorch.org/whl/cpu torch torchvision torchaudio
- The command you invoked (e.g., uv pip sync requirements.txt), ideally including the --verbose flag.
$ docker run --rm -ti python:3.12 /bin/bash
root@4d525f9218ee:/# curl -LsSf https://astral.sh/uv/install.sh | sh
downloading uv 0.1.40 x86_64-unknown-linux-gnu
installing to /root/.cargo/bin
uv
everything's installed!
To add $HOME/.cargo/bin to your PATH, either restart your shell or run:
source $HOME/.cargo/env (sh, bash, zsh)
source $HOME/.cargo/env.fish (fish)
root@4d525f9218ee:/# . ~/.cargo/env
root@4d525f9218ee:/# uv venv
Using Python 3.12.3 interpreter at: /usr/local/bin/python3
Creating virtualenv at: .venv
root@4d525f9218ee:/# . .venv/bin/activate
(.venv) root@4d525f9218ee:/# uv --version
uv 0.1.40
(.venv) root@4d525f9218ee:/# uv pip install --verbose --index-url https://download.pytorch.org/whl/cpu torch torchvision torchaudio &> uv_install.txt
(.venv) root@4d525f9218ee:/# uv pip list # nothing
(.venv) root@4d525f9218ee:/# uv pip install pip
Resolved 1 package in 198ms
Downloaded 1 package in 219ms
Installed 1 package in 10ms
+ pip==24.0
(.venv) root@4d525f9218ee:/# python -m pip install --verbose --index-url https://download.pytorch.org/whl/cpu torch torchvision torchaudio &> pip_install.txt
(.venv) root@4d525f9218ee:/# python -m pip list # expected result
Package Version
----------------- ----------
filelock 3.13.1
fsspec 2024.2.0
Jinja2 3.1.3
MarkupSafe 2.1.5
mpmath 1.3.0
networkx 3.2.1
numpy 1.26.3
pillow 10.2.0
pip 24.0
sympy 1.12
torch 2.3.0+cpu
torchaudio 2.3.0+cpu
torchvision 0.18.0+cpu
typing_extensions 4.9.0
(.venv) root@4d525f9218ee:/# deactivate
root@4d525f9218ee:/# rm -rf .venv
root@4d525f9218ee:/# uv venv && . .venv/bin/activate
Using Python 3.12.3 interpreter at: /usr/local/bin/python3
Creating virtualenv at: .venv
(.venv) root@4d525f9218ee:/# uv pip install --verbose torch torchvision torchaudio &> uv_pypi_install.txt
(.venv) root@4d525f9218ee:/# uv pip list # install works, but want the cpu version instead of this
Package Version
------------------------ ----------
filelock 3.14.0
fsspec 2024.3.1
jinja2 3.1.4
markupsafe 2.1.5
mpmath 1.3.0
networkx 3.3
numpy 1.26.4
nvidia-cublas-cu12 12.1.3.1
nvidia-cuda-cupti-cu12 12.1.105
nvidia-cuda-nvrtc-cu12 12.1.105
nvidia-cuda-runtime-cu12 12.1.105
nvidia-cudnn-cu12 8.9.2.26
nvidia-cufft-cu12 11.0.2.54
nvidia-curand-cu12 10.3.2.106
nvidia-cusolver-cu12 11.4.5.107
nvidia-cusparse-cu12 12.1.0.106
nvidia-nccl-cu12 2.20.5
nvidia-nvjitlink-cu12 12.4.127
nvidia-nvtx-cu12 12.1.105
pillow 10.3.0
sympy 1.12
torch 2.3.0
torchaudio 2.3.0
torchvision 0.18.0
typing-extensions 4.11.0
(.venv) root@4d525f9218ee:/#
- The current uv platform.
Linux, though applies across platforms.
- The current uv version (uv --version).
uv 0.1.40
Related Issues
- https://github.com/astral-sh/uv/issues/2777
Did this work in previous versions?
I'm not sure about past uv releases. I'm encountering this for the first time while trying to migrate the CI to use uv in https://github.com/CoffeaTeam/coffea.
Ah ok, no prob. Mostly was wondering if it was “obviously a regression” from today’s release.
Not that I know of, but I can replicate with an older uv release later tonight to check.
Can you try instead using uv pip install --verbose --index-url https://download.pytorch.org/whl/cpu torch==2.3.0+cpu torchvision==0.18.0+cpu torchaudio==2.3.0+cpu? I can't reproduce this on ARM, but I think it differs on ARM vs. x86.
It's explained here: https://github.com/astral-sh/uv/issues/1497#issuecomment-2098896853
Yeah, that works on x86 Linux
$ docker run --rm -ti -v /tmp:/tmp python:3.12 /bin/bash
root@9b29419d1e98:/# curl -LsSf https://astral.sh/uv/install.sh | sh
downloading uv 0.1.41 x86_64-unknown-linux-gnu
installing to /root/.cargo/bin
uv
everything's installed!
To add $HOME/.cargo/bin to your PATH, either restart your shell or run:
source $HOME/.cargo/env (sh, bash, zsh)
source $HOME/.cargo/env.fish (fish)
root@9b29419d1e98:/# . ~/.cargo/env
root@9b29419d1e98:/# uv venv
Using Python 3.12.3 interpreter at: /usr/local/bin/python3
Creating virtualenv at: .venv
root@9b29419d1e98:/# . .venv/bin/activate
(.venv) root@9b29419d1e98:/# uv --version
uv 0.1.41
(.venv) root@9b29419d1e98:/# uv pip install --verbose --index-url https://download.pytorch.org/whl/cpu torch==2.3.0+cpu torchvision==0.18.0+cpu torchaudio==2.3.0+cpu &> /tmp/uv_install_cpu_moniker.txt
(.venv) root@9b29419d1e98:/# uv pip list
Package Version
----------------- ----------
filelock 3.13.1
fsspec 2024.2.0
jinja2 3.1.3
markupsafe 2.1.5
mpmath 1.3.0
networkx 3.2.1
numpy 1.26.3
pillow 10.2.0
sympy 1.12
torch 2.3.0+cpu
torchaudio 2.3.0+cpu
torchvision 0.18.0+cpu
typing-extensions 4.9.0
(.venv) root@9b29419d1e98:/#
Huh. That is interesting. I take it that this isn't fully expected, even though there are known differences with regards to local version identifiers?
I haven't really dug into it. My guess is it relates to some unclear decisions around how PyTorch chooses to publish their wheels (e.g., some variants include +cpu while others do not).
Marking as compatibility. It's not a bug in uv per se (given our documented limitations) but I wish that it worked.
You can avoid the extra specificity on the packages that depend on torch, but because of +cpu you can't use a version range (like >=2.0.0) for torch itself:
uv pip install --index-url https://download.pytorch.org/whl/cpu torch==2.3.0+cpu torchvision torchaudio
NOTE: on ARM64 you need to omit the +cpu (or equivalent) suffix due to upstream inconsistency in how the wheels are packaged. That may be resolved in future, as the PyTorch maintainers are open to contributions to drop the +cpu local identifier.
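For reference, a rough sketch of the ARM64 (aarch64) equivalent under that assumption, i.e. the same CPU index but with the local identifier dropped (the pinned version just mirrors the x86 example above):
$ uv pip install --index-url https://download.pytorch.org/whl/cpu torch==2.3.0 torchvision torchaudio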
TL;DR: ~~Either:~~ (the second, struck-through approach turned out to be unreliable, see the resolution gotcha below)
- Add the local identifier suffix (+cpu, +cu121, etc.) to each package (which mandates an explicit version?), and they must all be installed together to resolve correctly, it seems. (UPDATE: only the top-level dependency that the others depend upon needs the suffix.)
- ~~Use PyPI as the primary index, with PyTorch as your extra index (which uv will prioritize packages from); then, to ensure torch without a local identifier is resolvable from the PyPI index, you'll need --index-strategy unsafe-first-match, and it'll circle back to the PyTorch variant being successfully resolved~~ 🎉
# Must provide an explicit torch version that the other two depend on to resolve:
$ uv pip install --index-url https://download.pytorch.org/whl/cpu torch==2.3.0+cpu torchvision torchaudio
Resolved 13 packages in 3.87s
Installed 13 packages in 234ms
+ filelock==3.13.1
+ fsspec==2024.2.0
+ jinja2==3.1.3
+ markupsafe==2.1.5
+ mpmath==1.3.0
+ networkx==3.2.1
+ numpy==1.26.3
+ pillow==10.2.0
+ sympy==1.12
+ torch==2.3.0+cpu
+ torchaudio==2.3.0+cpu
+ torchvision==0.18.0+cpu
+ typing-extensions==4.9.0
NOTE: if you attempt to use >= in a specifier, you must quote-wrap it to avoid shell redirection (>), which otherwise creates a file (e.g. =0.0.0+cpu); uv never sees the operator, so it cannot raise an error the way it does when the specifier is quoted:
$ uv pip install --extra-index-url https://download.pytorch.org/whl/cpu torch==2.3.0+cpu torchvision>=0.0.0+cpu 'torchaudio>=2.0.0+cpu'
error: Failed to parse `torchaudio>=2.0.0+cpu`
Caused by: Operator >= is incompatible with versions containing non-empty local segments (`+cpu`)
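To make the shell behaviour concrete, here is a minimal sketch of what the unquoted form actually does (the torchvision specifier is only illustrative):
# Unquoted: the shell parses ">=0.0.0+cpu" as output redirection, so this actually runs
# `uv pip install torchvision` and creates a file literally named "=0.0.0+cpu":
$ uv pip install torchvision>=0.0.0+cpu
# Quoted: uv sees the full specifier and can reject the `>=` + local-segment
# combination with the error shown above:
$ uv pip install 'torchvision>=0.0.0+cpu'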
Resolution gotcha (cache affects selection?)
UPDATE: I was mistaken about the --index-strategy approach to resolving torch. While uv will happily resolve with this approach, the actual torch package selected also seems to depend on the local cache:
# Failed to resolve (_related to prior discussions above with the `+cpu` target_)
$ uv pip install --index-strategy unsafe-first-match --extra-index-url https://download.pytorch.org/whl/cpu torch torchvision torchaudio
# Installed torch (PyPI) while torchvision + torchaudio were `+cu121` (PyTorch)...
$ uv pip install --index-strategy unsafe-first-match --extra-index-url https://download.pytorch.org/whl/cu121 torch torchvision torchaudio
# ...
+ torch==2.3.0
+ torchaudio==2.3.0+cu121
+ torchvision==0.18.0+cu121
# Install the cuda 12.1 version from PyTorch adding it to cache:
$ uv pip install --index-strategy unsafe-first-match --extra-index-url https://download.pytorch.org/whl/cu121 torch==2.3.0+cu121 torchvision torchaudio
- torch==2.3.0
+ torch==2.3.0+cu121
# Install again, but in a new venv (this time without the `+cu121` suffix again):
$ uv pip install --index-strategy unsafe-first-match --extra-index-url https://download.pytorch.org/whl/cu121 torch torchvision torchaudio
+ torch==2.3.0+cu121
+ torchaudio==2.3.0+cu121
+ torchvision==0.18.0+cu121
As can be seen above, the resolution differs due to the previous actions: this time the CUDA variant from PyTorch was installed instead of the PyPI torch package.
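One way to check whether the cache is what's steering the selection is to repeat the install in a fresh venv while bypassing the cache entirely with uv's --no-cache flag (a sketch; the venv name is illustrative):
$ uv venv .venv-nocache && . .venv-nocache/bin/activate
$ uv pip install --no-cache --index-strategy unsafe-first-match --extra-index-url https://download.pytorch.org/whl/cu121 torch torchvision torchaudio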
Original response
$ uv pip install torch
Resolved 21 packages in 3.35s
Downloaded 21 packages in 1m 03s
Installed 21 packages in 432ms
+ filelock==3.13.1
+ fsspec==2024.2.0
+ jinja2==3.1.3
+ markupsafe==2.1.5
+ mpmath==1.3.0
+ networkx==3.2.1
+ nvidia-cublas-cu12==12.1.3.1
+ nvidia-cuda-cupti-cu12==12.1.105
+ nvidia-cuda-nvrtc-cu12==12.1.105
+ nvidia-cuda-runtime-cu12==12.1.105
+ nvidia-cudnn-cu12==8.9.2.26
+ nvidia-cufft-cu12==11.0.2.54
+ nvidia-curand-cu12==10.3.2.106
+ nvidia-cusolver-cu12==11.4.5.107
+ nvidia-cusparse-cu12==12.1.0.106
+ nvidia-nccl-cu12==2.20.5
+ nvidia-nvjitlink-cu12==12.1.105
+ nvidia-nvtx-cu12==12.1.105
+ sympy==1.12
+ torch==2.3.0+cu121
+ typing-extensions==4.9.0
$ uv pip list
Package Version
------------------------ -----------
filelock 3.13.1
fsspec 2024.2.0
jinja2 3.1.3
markupsafe 2.1.5
mpmath 1.3.0
networkx 3.2.1
nvidia-cublas-cu12 12.1.3.1
nvidia-cuda-cupti-cu12 12.1.105
nvidia-cuda-nvrtc-cu12 12.1.105
nvidia-cuda-runtime-cu12 12.1.105
nvidia-cudnn-cu12 8.9.2.26
nvidia-cufft-cu12 11.0.2.54
nvidia-curand-cu12 10.3.2.106
nvidia-cusolver-cu12 11.4.5.107
nvidia-cusparse-cu12 12.1.0.106
nvidia-nccl-cu12 2.20.5
nvidia-nvjitlink-cu12 12.1.105
nvidia-nvtx-cu12 12.1.105
sympy 1.12
torch 2.3.0+cu121
typing-extensions 4.9.0
So that install resolved torch to torch==2.3.0+cu121, yet trying to then add torchaudio, or the more specific torchaudio==2.3.0+cu121, fails:
$ uv pip install torchaudio==2.3.0+cu121
× No solution found when resolving dependencies:
╰─▶ Because there is no version of torch==2.3.0 and torchaudio==2.3.0+cu121 depends on torch==2.3.0, we can conclude that torchaudio==2.3.0+cu121 cannot be used.
And because you require torchaudio==2.3.0+cu121, we can conclude that the requirements are unsatisfiable.
Meanwhile, as with the +cpu fix suggested before my comment, the equivalent explicit command does resolve correctly:
$ uv pip install --index-url https://download.pytorch.org/whl/cu121 torch==2.3.0+cu121 torchaudio==2.3.0+cu121
Resolved 23 packages in 3.98s
Downloaded 4 packages in 29.27s
Installed 23 packages in 339ms
+ filelock==3.13.1
+ fsspec==2024.2.0
+ jinja2==3.1.3
+ markupsafe==2.1.5
+ mpmath==1.3.0
+ networkx==3.2.1
+ nvidia-cublas-cu12==12.1.3.1
+ nvidia-cuda-cupti-cu12==12.1.105
+ nvidia-cuda-nvrtc-cu12==12.1.105
+ nvidia-cuda-runtime-cu12==12.1.105
+ nvidia-cudnn-cu12==8.9.2.26
+ nvidia-cufft-cu12==11.0.2.54
+ nvidia-curand-cu12==10.3.2.106
+ nvidia-cusolver-cu12==11.4.5.107
+ nvidia-cusparse-cu12==12.1.0.106
+ nvidia-nccl-cu12==2.20.5
+ nvidia-nvjitlink-cu12==12.1.105
+ nvidia-nvtx-cu12==12.1.105
+ sympy==1.12
+ torch==2.3.0+cu121
+ torchaudio==2.3.0+cu121
+ triton==2.3.0
+ typing-extensions==4.9.0
So there is some issue there with uv resolving torch?
- Even after it resolves and installs it separately as torch==2.3.0+cu121, it can only resolve with the explicit torchaudio==2.3.0+cu121 at the same time, not as a 2nd install.
- While torch torchaudio without the +cu121 suffix fails to resolve.
Definitely seems like some inconsistency with uv?
EDIT: Oh I see the linked issue references this gotcha (local identifiers support) with uv, and specifically cites PyTorch as an example.
So by setting it as an extra index URL instead, the PyTorch index will be preferred by uv. But you need the unsafe-first-match strategy so that uv can find/resolve the torch package available on PyPI (since the PyTorch index, being focused on only that "local identifier" variant, doesn't provide a plain torch); uv then resolves successfully and still prefers the PyTorch package anyway 🤷♂️
$ uv pip install \
--index-strategy unsafe-first-match \
--extra-index-url https://download.pytorch.org/whl/cu121 \
torch torchaudio
Resolved 23 packages in 3.37s
Installed 23 packages in 264ms
+ filelock==3.13.1
+ fsspec==2024.2.0
+ jinja2==3.1.3
+ markupsafe==2.1.5
+ mpmath==1.3.0
+ networkx==3.2.1
+ nvidia-cublas-cu12==12.1.3.1
+ nvidia-cuda-cupti-cu12==12.1.105
+ nvidia-cuda-nvrtc-cu12==12.1.105
+ nvidia-cuda-runtime-cu12==12.1.105
+ nvidia-cudnn-cu12==8.9.2.26
+ nvidia-cufft-cu12==11.0.2.54
+ nvidia-curand-cu12==10.3.2.106
+ nvidia-cusolver-cu12==11.4.5.107
+ nvidia-cusparse-cu12==12.1.0.106
+ nvidia-nccl-cu12==2.20.5
+ nvidia-nvjitlink-cu12==12.1.105
+ nvidia-nvtx-cu12==12.1.105
+ sympy==1.12
+ torch==2.3.0+cu121
+ torchaudio==2.3.0+cu121
+ triton==2.3.0
+ typing-extensions==4.9.0
If, of course, you remove the extra index URL for PyTorch, then it'll resolve the standard torch==2.3.0 + torchaudio==2.3.0 packages from PyPI and install those like you'd expect.
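For comparison, that baseline without the PyTorch extra index would just be the plain command against PyPI:
$ uv pip install torch torchaudio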
As long as the package is known to exist on the PyTorch index, it should always be preferred this way, even if there were a malicious version on PyPI, from what I understand? Once uv supports the feature to lock the index to PyTorch for these specific packages, that may help, but I assume it wouldn't let you drop the index strategy (it might then not even be able to resolve the PyPI torch package just so it can circle back to PyTorch?).
Probably better to be explicit about the local identifier though. I am new to Python and was referencing someone else's pip install where the local identifier was implicit from the --extra-index-url (a variable during builds to support the different PyTorch variants).
I am having almost the same problem, but the issue is:
- I am using a requirements.txt
- that requirements.txt includes libraries that depend on torch==2.* (e.g., transformers)
- Even if I install the CPU torch first using uv pip install torch==2.1.2+cpu and then install the requirements.txt against the PyPI index, uv resolves the dependencies of the PyPI torch, which on Linux x86 are the nvidia-cuda packages. Of note, it doesn't resolve torch itself again, so I end up with torch+cpu but with the torch CUDA deps installed, which massively bloats the image size.
Unfortunately that's not enough information for me to fully understand the issue, but you should consider using a constraints file in your second install, with torch==2.1.2+cpu? That would ensure that we respect the already-installed version during resolution.
Sadly, specifying +cpu in a constraint doesn't currently work in uv. Here's an example:
requirements.txt
easyocr==1.7.1
torch==2.1.*
constraint.txt
torch==2.1.2+cpu
torchvision==0.16.2+cpu
When we compile the requirements to check what uv is going to resolve by default, without constraints:
other packages
....
torch==2.1.2
# via
# easyocr
# torchvision
torchvision==0.16.2
# via easyocr
...
Running the command to install with the torch CPU index:
uv pip install -r requirements.txt -c constraint.txt --extra-index-url "https://pypi.org/simple https://download.pytorch.org/whl/cpu"
we get
× No solution found when resolving dependencies:
╰─▶ Because there is no version of torch==2.1.2+cpu and you require torch==2.1.2+cpu, we can conclude that the requirements are
unsatisfiable.
Does uv not qualify 2.1.2+cpu as matching 2.1.* because it is not semver compliant?
Thanks, I’ll take a look when I can. The PyTorch stuff is always tricky.
Yeah, PyTorch does things their own way and isn't compliant with any standard :// they are big enough to get away with it. I would be glad to contribute if you can point me to the relevant parts where uv resolves the dependency tree for requirements.
Streamlit uses uv to install dependencies from a requirements.txt file, which caused our app to fail. I managed to work around it by pinning the version number, as suggested here:
--extra-index-url https://download.pytorch.org/whl/cpu
torch==2.3.0+cpu
torchvision
torchaudio
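For completeness, installing from that file is then just the usual command (assuming it is saved as requirements.txt):
$ uv pip install -r requirements.txt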