docs: Add PyTorch installation guide
Hello there! First real docs PR for uv.
- I expect this will be rewritten a gazillion times to keep a consistent tone with the rest of the docs, despite my trying to stick to it as best as I could. Feel free to edit!
- I went super verbose, while also providing a callout with a TL;DR on top. Scrap anything you feel is redundant!
- I placed the guide under `integrations` since Charlie added the FastAPI integration there.
Summary
Addresses #5945
Test Plan
I just previewed the docs on the mkdocs dev server to check that they render nicely.
I could not test that the commands I wrote work outside of macOS. If someone among the contributors has a Windows/Linux laptop, that should be enough, even for the GPU-supported versions: I expect the installation will just break once torch checks for CUDA (perhaps even at runtime).
Have you considered including a way to specify package versions? This part is the difference between uv and pip. https://github.com/astral-sh/uv/blob/main/docs/pip/compatibility.md#local-version-identifiers
Like `uv add -- "torch==2.4.0+cpu"`? I didn't think about that: having tried on macOS, it simply fails. I just went with "let's port to uv the pip commands that the torch docs recommend". Any suggestion on how I could try that?
This will definitely require validation from folks on other platforms. Thanks for starting though!
On macOS, packages are obtained directly from PyPI, which eliminates the need for local version identifiers.
Therefore, on macOS alone we cannot test whether the usage within the project (`uv add`/`uv remove`) is correct.
I don't know how to correctly provide the PyTorch version with CUDA to uv.
Exactly! That's what I wrote on the macOS section. Sorry if I wasn't being clear.
I realised I can test this out on Colab for Linux 😈
- Go to colab.new
- Run:
```shell
!curl -LsSf https://astral.sh/uv/install.sh | sh
!source $HOME/.cargo/env bash
!/root/.cargo/bin/uv --version  # doesn't find it on the $PATH, I guess I should restart the shell? idk
!/root/.cargo/bin/uv venv
```
Then:
```shell
!/root/.cargo/bin/uv pip install -- torch  # works; installs GPU cu12
!/root/.cargo/bin/uv pip install -- "torch==2.4.0+cpu"  # fails
# these work:
!/root/.cargo/bin/uv pip install --extra-index-url=https://download.pytorch.org/whl/cpu -- torch
!/root/.cargo/bin/uv pip install --extra-index-url=https://download.pytorch.org/whl/cu118 -- torch
!/root/.cargo/bin/uv pip install --extra-index-url=https://download.pytorch.org/whl/cu121 -- torch
!/root/.cargo/bin/uv pip install --extra-index-url=https://download.pytorch.org/whl/cu124 -- torch
```
Has uv considered adding the index URL to the pyproject.toml file when using `uv add`? After adding PyTorch, when I added another package from PyPI, the lockfile switched PyTorch to be installed from PyPI. lockfile.zip
Commands:
```shell
uv add --extra-index-url=https://download.pytorch.org/whl/cu121 torch torchvision torchaudio --no-sync
uv add deep-translator
```
@FishAlchemist yes, it's on the roadmap https://github.com/astral-sh/uv/issues/171
@zanieb If it's not yet supported, it seems like we can't include the project API part in this PR's docs. After all, when the source is not PyPI, the lockfile might be unexpected.
Uh yeah, just pushed a commit to remove all mentions of modifying pyproject.toml.
So:
- We might want to give this a spin on a Windows machine to make sure it works.
- Given that there is currently no mechanism to bind a specific package to a specific source, the only thing we can document is running `uv pip install --extra-index-url=...` or `uv add --extra-index-url=...`, am I right?
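For reference, the two invocations in question would look something like this (the cu121 index URL is taken from the earlier tests in this thread; purely illustrative):

```shell
# Standalone environment (pip interface), pulling from the PyTorch index:
uv pip install --extra-index-url=https://download.pytorch.org/whl/cu121 torch

# Project interface; note the index URL is given on the CLI only and is
# not persisted to pyproject.toml:
uv add --extra-index-url=https://download.pytorch.org/whl/cu121 torch
```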
@baggiponte I think there's no problem downloading PyTorch with `uv pip install` on Windows. Although I've only run CUDA 12.1, I was able to do simple tests using the installation method provided by PyTorch, just using `uv pip` instead. For more complex tasks, I switched to Linux because my Windows computer has insufficient memory. As for the project API, although the command runs on Windows, the lockfile results are not what I expected.
For PyTorch, I still recommend including the specific version in the documentation. I remember seeing issues in the past where problems only occurred when a specific version was specified, and I'm not sure whether they have been fixed.
For more complex tasks, I switched to Linux because my Windows computer has insufficient memory.
Do you think I should try and/or cover some of those?
As for the project API, although the command runs on Windows, the lockfile results are not what I expected.
Uhm, I guess this deserves an issue of its own?
For PyTorch, I still recommend including the specific version in the documentation. I remember seeing issues in the past where problems only occurred when a specific version was specified, and I'm not sure whether they have been fixed.
Were those issues uv-related or just generic torch version problems? Because otherwise I would not be super inclined to add this kind of recommendation to the docs.
Unrelated: perhaps I could create a new repo and use GitHub Actions on various runners to see whether everything works, if we need more complex installation tests.
@baggiponte If the lockfile doesn't have a macOS wheel, I'm unsure whether `uv sync` can successfully execute on a Mac.
Commands (uv 0.3.3 (deea6025a 2024-08-23)):
```shell
uv init torch_uv -p 3.10
# Remember to enter the directory
uv python pin 3.10
uv add --extra-index-url=https://download.pytorch.org/whl/cu121 torch --no-sync
```
Note: created on Windows 11 (x86-64).
Part of uv.lock for torch:
```toml
[[package]]
name = "torch"
version = "2.4.0+cu121"
source = { registry = "https://download.pytorch.org/whl/cu121" }
dependencies = [
    { name = "filelock" },
    { name = "fsspec" },
    { name = "jinja2" },
    { name = "networkx" },
    { name = "nvidia-cublas-cu12", marker = "platform_machine == 'x86_64' and platform_system == 'Linux'" },
    { name = "nvidia-cuda-cupti-cu12", marker = "platform_machine == 'x86_64' and platform_system == 'Linux'" },
    { name = "nvidia-cuda-nvrtc-cu12", marker = "platform_machine == 'x86_64' and platform_system == 'Linux'" },
    { name = "nvidia-cuda-runtime-cu12", marker = "platform_machine == 'x86_64' and platform_system == 'Linux'" },
    { name = "nvidia-cudnn-cu12", marker = "platform_machine == 'x86_64' and platform_system == 'Linux'" },
    { name = "nvidia-cufft-cu12", marker = "platform_machine == 'x86_64' and platform_system == 'Linux'" },
    { name = "nvidia-curand-cu12", marker = "platform_machine == 'x86_64' and platform_system == 'Linux'" },
    { name = "nvidia-cusolver-cu12", marker = "platform_machine == 'x86_64' and platform_system == 'Linux'" },
    { name = "nvidia-cusparse-cu12", marker = "platform_machine == 'x86_64' and platform_system == 'Linux'" },
    { name = "nvidia-nccl-cu12", marker = "platform_machine == 'x86_64' and platform_system == 'Linux'" },
    { name = "nvidia-nvtx-cu12", marker = "platform_machine == 'x86_64' and platform_system == 'Linux'" },
    { name = "sympy" },
    { name = "triton", marker = "python_full_version < '3.13' and platform_machine == 'x86_64' and platform_system == 'Linux'" },
    { name = "typing-extensions" },
]
wheels = [
    { url = "https://download.pytorch.org/whl/cu121/torch-2.4.0%2Bcu121-cp310-cp310-linux_x86_64.whl", hash = "sha256:28bfba084dca52a06c465d7ad0f3cc372c35fc503f3eab881cc17a5fd82914e7" },
    { url = "https://download.pytorch.org/whl/cu121/torch-2.4.0%2Bcu121-cp310-cp310-win_amd64.whl", hash = "sha256:9244bdc160d701915ae03e14cc25c085aa11e30d711a0b64bef0ee427e04632c" },
    { url = "https://download.pytorch.org/whl/cu121/torch-2.4.0%2Bcu121-cp311-cp311-linux_x86_64.whl", hash = "sha256:a9fff32d365e0c74b6909480548b2e291314a204adb29b6bb6f2c6d33f8be26c" },
    { url = "https://download.pytorch.org/whl/cu121/torch-2.4.0%2Bcu121-cp311-cp311-win_amd64.whl", hash = "sha256:bada31485e04282b9f099da39b774484d3e4c431b7ea0df3663817295ae764e4" },
    { url = "https://download.pytorch.org/whl/cu121/torch-2.4.0%2Bcu121-cp312-cp312-linux_x86_64.whl", hash = "sha256:49ac55a6497ddd6d0cdd51b5ea27d8ebe20c9273077855e9c96eb0dc289f07c3" },
    { url = "https://download.pytorch.org/whl/cu121/torch-2.4.0%2Bcu121-cp312-cp312-win_amd64.whl", hash = "sha256:b5c27549daf5f3209da6e07607f2bb8d02712555734fcd8cd7a23703a6e7d639" },
]
```
According to the documentation:
"uv.lock is a universal or cross-platform lockfile that captures the packages that would be installed across all possible Python markers such as operating system, architecture, and Python version."
If a uv.lock generated on Windows cannot be used on other platforms, then it is not a uv.lock as documented. Therefore, when the documentation mentions using the project API with a dependency on PyTorch, the uv.lock should conform to the documentation's specification. Or should we note that the generated lockfile is not universal?
@baggiponte
As for describing the installation method for a specific version: uv installs PyTorch from sources other than PyPI, and that requires not only the version number but also the local version identifier.
Alternatively, mentioning local version identifiers in the document might be another way to help people understand how to install a specific version.

| pip | uv pip |
| --- | --- |
| 2.4.0 | 2.4.0+cu121 |
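The `+cu121` suffix is a PEP 440 "local version identifier". A minimal sketch of how it behaves, assuming the third-party `packaging` library is available:

```python
from packaging.version import Version
from packaging.specifiers import SpecifierSet

v = Version("2.4.0+cu121")
print(v.base_version)  # 2.4.0 -- the release without the local segment
print(v.local)         # cu121 -- the local version identifier

# Version equality is strict: the +cu121 build is a distinct version...
print(v == Version("2.4.0"))  # False

# ...but PEP 440 specifier matching ignores local labels when the specifier
# itself carries none, which is how `pip install torch==2.4.0` can resolve
# to the 2.4.0+cu121 wheel from the PyTorch index:
print(v in SpecifierSet("==2.4.0"))  # True
```

This matching behavior is exactly where uv's documented compatibility notes on local version identifiers diverge from pip.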
Hey there, was away for the weekend. Thank you very much for the explanation 😊 Will get back to this after work, later today.
In the meanwhile, to recap:
- I should investigate the lockfiles generated by a torch installation and document that, at least in this case, they might not be cross-platform.
- Cover the local version identifier differences between pip and uv.
Did I get everything?
Thank you again for taking the time to steer me through this!
While this is generally correct, there's a potential issue when using uv add to install PyTorch. If not configured properly, uv add might overwrite your existing PyTorch installation with a version from PyPI that lacks CUDA support, even if you previously had a GPU-accelerated version, as I mentioned in this comment: https://github.com/astral-sh/uv/pull/6523#issuecomment-2307497266
Note: Since PyPI's PyTorch offers wheels for macOS, Linux, and Windows, if we switch the source to PyPI and remove the local version identifiers, there will be no errors. However, the version will possibly switch from CUDA to CPU-only.
~~Note: The Linux version of PyTorch CUDA 12.1 on PyPI already supports CUDA.~~
Note: The Linux version of PyTorch CUDA 12.4 on PyPI already supports CUDA.
You're welcome. I know how frustrating these issues can be, so I wanted to save other users some time. Providing good documentation is a great service to users, and I appreciate you taking the time to do so.
It's pretty annoying: extra-index-url somehow overrides the default index when a package with the same name exists but some versions are missing. uv simply does not fall back to the default index for the missing versions.
@inflation there are details on that behavior in the documentation. Please don't complain about it in someone's pull request.
This is precisely where it happens the most. Installing PyTorch using its index introduces the problem. pixi has a similar issue and contains a nice example and explanation.
Hello there! Sorry for disappearing, but as if it was not enough already, we got a sparkle of floods here too.
I edited a couple of things with the last commit I pushed.
- As @albanD rightfully pointed out, I added a small callout pointing to the relevant bits of the uv docs.
- I reworked a bit the TL;DR section to make it a bit more straightforward.
- I added a mention of #171
- I tried to explain that there might be issues with the cross-platform lockfile. I am not sure I explained correctly what @FishAlchemist meant, though. I guess what they mean is:
  - If someone runs `uv add torch --extra-index-url=...`
  - and then `uv add foobar`
  - then the PyTorch version might be replaced with the PyPI one? If so, how can I phrase this correctly?
@zanieb let me know if it makes sense, suggest edits or make them directly.
@baggiponte The primary issue with file locking is that the extra-index-url specified on the CLI is not written to pyproject.toml (Nor should it be written automatically). As a result, the next time you lock your dependencies, it won't remember to search the extra-index-url. Therefore, before adding PyTorch using the project API, it's recommended to manually add the extra-index-url to pyproject.toml instead of providing it on the CLI.
Makes perfect sense!
I guess it might be a good idea to mention that you should add [tool.uv.sources] to your pyproject. What do you think?
According to the documentation (version 0.4.10), [tool.uv.sources] only supports these sources. Therefore, using [tool.uv.sources] requires you to find the sources yourself.
I tried using extra-index-url, but then there are no macOS wheels available:
```toml
[tool.uv]
extra-index-url = ["https://download.pytorch.org/whl/cu121"]
```
I've yet to find a [tool.uv.sources] configuration that can support Windows, Linux, macOS, and CUDA.
Very clear. Pushed another minor edit mentioning this. Would love to hear your feedback on the phrasing.
For example: suppose a Windows user runs uv add torch and then a Linux user runs uv sync to synchronise the lockfile. The Windows user will get CPU-only PyTorch, while the Linux user will get CUDA 12.1 PyTorch.
@baggiponte I'm a bit worried this could be confusing, since it assumes the user hasn't given uv any PyTorch package indexes anywhere. However, that was never explicitly mentioned in the narrative, and I'm unsure whether it's a good idea to imply it.
As for the [tool.uv.sources] issue, I personally don't know how to use it to make PyTorch's lockfiles cross-platform, so I'm not sure whether current uv can make PyTorch cross-platform through it. It therefore feels inappropriate for the document to mention the possibility of it not working.
@zanieb Can current uv create cross-platform lockfiles for projects that depend on PyTorch? (Capable of running across Windows, Linux, and macOS, at least for a specified CUDA version.)
I think this might be unblocked by #7481 or some of the work following that.
Hi everyone,
Since the release of uv 0.4.23, we might be able to start this work. Because I'm not very familiar with the new features, I asked @charliermarsh on Discord how to write a pyproject.toml for PyTorch that supports Windows CUDA 12.1 and Linux (from PyPI) in this version. I got two examples, and I can see that the lockfile supports the three major platforms, although I only asked about Windows and Linux. I'm not sure whether I need to write more details about macOS.
This is the question I asked, and it's also a personal use case for me.
Conversation 1:
I need PyTorch 2.1.2, which requires CUDA 12.1 on my Windows system. On my Linux system, I need to install it from PyPI. I'm wondering how to write a pyproject.toml file to fulfill these requirements. Moreover, installing PyTorch from non-PyPI sources using uv requires specifying Local version identifiers. Therefore, the PyTorch version should be 2.1.2+cu121 on Windows and 2.1.2 on Linux.
https://discord.com/channels/1039017663004942429/1207998321562619954/1296492153576489030
```toml
[project]
name = "project"
version = "0.1.0"
requires-python = ">=3.10"
dependencies = [
    "torch==2.1.2+cu121 ; platform_system == 'Windows'",
    "torch==2.1.2 ; platform_system != 'Windows'",
]

[tool.uv.sources]
torch = [
    { index = "torch-cu121", marker = "platform_system == 'Windows'" },
    { index = "pypi", marker = "platform_system != 'Windows'" },
]

[[tool.uv.index]]
name = "pypi"
url = "https://pypi.org/simple"

[[tool.uv.index]]
name = "torch-cu121"
url = "https://download.pytorch.org/whl/cu121"
```
Conversation 2 (continuing conversation 1):
Thank you for providing the example. I originally thought that if no marker matched, uv would go to PyPI to search, but I didn't expect that I needed to explicitly specify PyPI.
https://discord.com/channels/1039017663004942429/1207998321562619954/1296500150600077322
```toml
[project]
name = "project"
version = "0.1.0"
requires-python = ">=3.10"
dependencies = [
    "torch==2.1.2 ; platform_system != 'Windows'",
    "torch==2.1.2+cu121 ; platform_system == 'Windows'",
]

[tool.uv.sources]
torch = [
    { index = "torch-cu121", marker = "platform_system == 'Windows'" },
]

[[tool.uv.index]]
name = "torch-cu121"
url = "https://download.pytorch.org/whl/cu121"
explicit = true
```
The above examples can be used as a reference for this PR.
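The `platform_system` markers doing the work in both examples can be evaluated directly. A small sketch, assuming the third-party `packaging` library is available (the environment dicts are illustrative overrides, not real machines):

```python
from packaging.markers import Marker

marker = Marker("platform_system == 'Windows'")

# uv applies the torch-cu121 index only where the marker holds, and the
# plain PyPI requirement where it does not:
print(marker.evaluate({"platform_system": "Windows"}))  # True  -> 2.1.2+cu121 from the PyTorch index
print(marker.evaluate({"platform_system": "Linux"}))    # False -> plain 2.1.2 from PyPI
```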
Hello there! Saw the latest release with extreme joy! Might hop into the Discord to iron out the last details. I am not 100% sure I have the time to do this tonight; might be tomorrow or over the weekend. If anyone has the time, feel free to pick up where I left off and bring this over the finish line.
Hi, I am facing a problem on macOS with the PyTorch CPU installation (https://github.com/astral-sh/uv/issues/8358), are there any suggestions? Thanks!
Hello there! Finally have some time.
The example from Conversation 2 works for me on Python 3.10 and 3.11, but not 3.12. If I just do `uv add torch`, everything works as usual on all supported Python versions. I am on an ARM Mac.
@baggiponte I noticed that PyPI's torch package is now using CUDA 12.4, but I'm unsure of the exact date and version when this change was made.