
candidate is not for this requirement when installing jax[cuda12_pip]

Open Roger-luo opened this issue 2 years ago • 7 comments

Steps to Reproduce

First, add the following source to pyproject.toml:

[[tool.rye.sources]]
name = "google-jax"
url = "https://storage.googleapis.com/jax-releases/jax_cuda_releases.html"
type = "find-links"

Then add jax with CUDA support as a dependency:

rye add "jax[cuda12_pip]"

Then run rye sync.

Expected Result

This should just work, as the jax README describes for pip.

Actual Result

Somehow the extra cuda12_pip is parsed as cuda12-pip,cuda12-pip, and rye concludes the candidate does not match the requirement.
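For context (my addition, not from the thread): extra names are normalized the same way package names are under PEP 503/685 — runs of `-`, `_`, and `.` collapse to a single `-` and the result is lowercased — which is why `cuda12_pip` shows up as `cuda12-pip` in the error message. A minimal sketch of that normalization rule:

```python
import re

def canonicalize(name: str) -> str:
    """Normalize a package or extra name per PEP 503/685:
    collapse runs of '-', '_', '.' into a single '-' and lowercase."""
    return re.sub(r"[-_.]+", "-", name).lower()

print(canonicalize("cuda12_pip"))  # -> cuda12-pip
```

The mismatch in the assertion error is therefore not about the normalization itself (both sides are normalized), but about the extra appearing twice on one side of the comparison.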

Version Info

> rye --version
rye 0.11.0
commit: 0.11.0 (f6f63d6c1 2023-07-18)
platform: linux (x86_64)
self-python: [email protected]
symlink support: true

Stacktrace

Reusing already existing virtualenv
Generating production lockfile: /home/roger/code/python/duck_rg/requirements.lock
Traceback (most recent call last):
  File "/home/roger/.rye/pip-tools/[email protected]/bin/pip-compile", line 8, in <module>
    sys.exit(cli())
             ^^^^^
  File "/home/roger/.rye/pip-tools/[email protected]/lib/python3.11/site-packages/click/core.py", line 1157, in __call__
    return self.main(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/roger/.rye/pip-tools/[email protected]/lib/python3.11/site-packages/click/core.py", line 1078, in main
    rv = self.invoke(ctx)
         ^^^^^^^^^^^^^^^^
  File "/home/roger/.rye/pip-tools/[email protected]/lib/python3.11/site-packages/click/core.py", line 1434, in invoke
    return ctx.invoke(self.callback, **ctx.params)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/roger/.rye/pip-tools/[email protected]/lib/python3.11/site-packages/click/core.py", line 783, in invoke
    return __callback(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/roger/.rye/pip-tools/[email protected]/lib/python3.11/site-packages/click/decorators.py", line 33, in new_func
    return f(get_current_context(), *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/roger/.rye/pip-tools/[email protected]/lib/python3.11/site-packages/piptools/scripts/compile.py", line 592, in cli
    results = resolver.resolve(max_rounds=max_rounds)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/roger/.rye/pip-tools/[email protected]/lib/python3.11/site-packages/piptools/resolver.py", line 593, in resolve
    is_resolved = self._do_resolve(
                  ^^^^^^^^^^^^^^^^^
  File "/home/roger/.rye/pip-tools/[email protected]/lib/python3.11/site-packages/piptools/resolver.py", line 625, in _do_resolve
    resolver.resolve(
  File "/home/roger/.rye/pip-tools/[email protected]/lib/python3.11/site-packages/pip/_internal/resolution/resolvelib/resolver.py", line 92, in resolve
    result = self._result = resolver.resolve(
                            ^^^^^^^^^^^^^^^^^
  File "/home/roger/.rye/pip-tools/[email protected]/lib/python3.11/site-packages/pip/_vendor/resolvelib/resolvers.py", line 546, in resolve
    state = resolution.resolve(requirements, max_rounds=max_rounds)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/roger/.rye/pip-tools/[email protected]/lib/python3.11/site-packages/pip/_vendor/resolvelib/resolvers.py", line 427, in resolve
    failure_causes = self._attempt_to_pin_criterion(name)
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/roger/.rye/pip-tools/[email protected]/lib/python3.11/site-packages/pip/_vendor/resolvelib/resolvers.py", line 249, in _attempt_to_pin_criterion
    satisfied = all(
                ^^^^
  File "/home/roger/.rye/pip-tools/[email protected]/lib/python3.11/site-packages/pip/_vendor/resolvelib/resolvers.py", line 250, in <genexpr>
    self._p.is_satisfied_by(requirement=r, candidate=candidate)
  File "/home/roger/.rye/pip-tools/[email protected]/lib/python3.11/site-packages/pip/_internal/resolution/resolvelib/provider.py", line 240, in is_satisfied_by
    return requirement.is_satisfied_by(candidate)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/roger/.rye/pip-tools/[email protected]/lib/python3.11/site-packages/pip/_internal/resolution/resolvelib/requirements.py", line 83, in is_satisfied_by
    assert candidate.name == self.name, (
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: Internal issue: Candidate is not for this requirement jax[cuda12-pip,cuda12-pip] vs jax[cuda12-pip]
error: could not write production lockfile for project

Caused by:
    failed to generate lockfile

Roger-luo avatar Aug 20 '23 17:08 Roger-luo

Instead of writing the dependency as above, writing the specification as jax[cuda12_pip,cuda12_pip] (with the extra duplicated) temporarily works around this issue.
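Restated as a pyproject.toml fragment (a hypothetical project file; only the dependency line comes from the comment above):

```toml
[project]
# duplicating the extra sidesteps the mismatched-extras assertion
dependencies = ["jax[cuda12_pip,cuda12_pip]"]
```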

Roger-luo avatar Aug 20 '23 17:08 Roger-luo

Does the command pip install --upgrade "jax[cuda12_pip]" -f https://storage.googleapis.com/jax-releases/jax_cuda_releases.html install the right requirements?

CharlesChen0823 avatar Sep 14 '23 02:09 CharlesChen0823

Yes, since that's the suggested command from the jax README.

Roger-luo avatar Sep 14 '23 17:09 Roger-luo

I don't have a GPU, so this command fails for me. I cannot debug this problem....

CharlesChen0823 avatar Sep 15 '23 01:09 CharlesChen0823

I'm completely at a loss as to what is happening here.

mitsuhiko avatar Jan 21 '24 22:01 mitsuhiko

I just tried to reproduce this issue, but I was able to install it without any problems.

[project]
name = "pytorch-test"
version = "0.1.0"
description = ""
authors = [{ name = "", email = "[email protected]" }]
dependencies = ["jax[cuda12_pip]>=0.4.23"]
requires-python = "==3.11.*"
readme = "README.md"
license = { text = "MIT" }

[[tool.rye.sources]]
name = "google-jax"
url = "https://storage.googleapis.com/jax-releases/jax_cuda_releases.html"
type = "find-links"

❯ rye add "jax[cuda12_pip]"
Added jax[cuda12_pip]>=0.4.23 as regular dependency
❯ rye sync
Reusing already existing virtualenv
Generating production lockfile: /home/sbarrios/code/pytorch-test/requirements.lock
Generating dev lockfile: /home/sbarrios/code/pytorch-test/requirements-dev.lock
Installing dependencies
Found existing installation: pytorch-test 0.1.0
Uninstalling pytorch-test-0.1.0:
  Successfully uninstalled pytorch-test-0.1.0
Looking in indexes: https://pypi.org/simple/
Looking in links: https://storage.googleapis.com/jax-releases/jax_cuda_releases.html
Obtaining file:///. (from -r /tmp/tmps86i4tsk (line 1))
  Installing build dependencies ... done
  Checking if build backend supports build_editable ... done
  Getting requirements to build editable ... done
  Preparing editable metadata (pyproject.toml) ... done
Collecting jax==0.4.23 (from -r /tmp/tmps86i4tsk (line 2))
  Downloading jax-0.4.23-py3-none-any.whl.metadata (24 kB)
Collecting jaxlib==0.4.23+cuda12.cudnn89 (from -r /tmp/tmps86i4tsk (line 3))
  Downloading https://storage.googleapis.com/jax-releases/cuda12/jaxlib-0.4.23%2Bcuda12.cudnn89-cp311-cp311-manylinux2014_x86_64.whl (131.8 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 131.8/131.8 MB 55.8 MB/s eta 0:00:00
Collecting ml-dtypes==0.3.2 (from -r /tmp/tmps86i4tsk (line 4))
  Downloading ml_dtypes-0.3.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (20 kB)
Collecting numpy==1.26.3 (from -r /tmp/tmps86i4tsk (line 5))
  Using cached numpy-1.26.3-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (61 kB)
Collecting nvidia-cublas-cu12==12.3.4.1 (from -r /tmp/tmps86i4tsk (line 6))
  Downloading nvidia_cublas_cu12-12.3.4.1-py3-none-manylinux1_x86_64.whl.metadata (1.5 kB)
Collecting nvidia-cuda-cupti-cu12==12.3.101 (from -r /tmp/tmps86i4tsk (line 7))
  Downloading nvidia_cuda_cupti_cu12-12.3.101-py3-none-manylinux1_x86_64.whl.metadata (1.6 kB)
Collecting nvidia-cuda-nvcc-cu12==12.3.107 (from -r /tmp/tmps86i4tsk (line 8))
  Downloading nvidia_cuda_nvcc_cu12-12.3.107-py3-none-manylinux1_x86_64.whl.metadata (1.5 kB)
Collecting nvidia-cuda-nvrtc-cu12==12.3.107 (from -r /tmp/tmps86i4tsk (line 9))
  Downloading nvidia_cuda_nvrtc_cu12-12.3.107-py3-none-manylinux1_x86_64.whl.metadata (1.5 kB)
Collecting nvidia-cuda-runtime-cu12==12.3.101 (from -r /tmp/tmps86i4tsk (line 10))
  Downloading nvidia_cuda_runtime_cu12-12.3.101-py3-none-manylinux1_x86_64.whl.metadata (1.5 kB)
Collecting nvidia-cudnn-cu12==8.9.7.29 (from -r /tmp/tmps86i4tsk (line 11))
  Downloading nvidia_cudnn_cu12-8.9.7.29-py3-none-manylinux1_x86_64.whl.metadata (1.6 kB)
Collecting nvidia-cufft-cu12==11.0.12.1 (from -r /tmp/tmps86i4tsk (line 12))
  Downloading nvidia_cufft_cu12-11.0.12.1-py3-none-manylinux1_x86_64.whl.metadata (1.5 kB)
Collecting nvidia-cusolver-cu12==11.5.4.101 (from -r /tmp/tmps86i4tsk (line 13))
  Downloading nvidia_cusolver_cu12-11.5.4.101-py3-none-manylinux1_x86_64.whl.metadata (1.6 kB)
Collecting nvidia-cusparse-cu12==12.2.0.103 (from -r /tmp/tmps86i4tsk (line 14))
  Downloading nvidia_cusparse_cu12-12.2.0.103-py3-none-manylinux1_x86_64.whl.metadata (1.6 kB)
Collecting nvidia-nccl-cu12==2.19.3 (from -r /tmp/tmps86i4tsk (line 15))
  Using cached nvidia_nccl_cu12-2.19.3-py3-none-manylinux1_x86_64.whl.metadata (1.8 kB)
Collecting nvidia-nvjitlink-cu12==12.3.101 (from -r /tmp/tmps86i4tsk (line 16))
  Using cached nvidia_nvjitlink_cu12-12.3.101-py3-none-manylinux1_x86_64.whl.metadata (1.5 kB)
Collecting opt-einsum==3.3.0 (from -r /tmp/tmps86i4tsk (line 17))
  Using cached opt_einsum-3.3.0-py3-none-any.whl (65 kB)
Collecting scipy==1.12.0 (from -r /tmp/tmps86i4tsk (line 18))
  Using cached scipy-1.12.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (60 kB)
Downloading jax-0.4.23-py3-none-any.whl (1.7 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.7/1.7 MB 15.3 MB/s eta 0:00:00
Downloading ml_dtypes-0.3.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (2.2 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.2/2.2 MB 71.4 MB/s eta 0:00:00
Using cached numpy-1.26.3-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (18.3 MB)
Downloading nvidia_cublas_cu12-12.3.4.1-py3-none-manylinux1_x86_64.whl (412.6 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 412.6/412.6 MB 28.1 MB/s eta 0:00:00
Downloading nvidia_cuda_cupti_cu12-12.3.101-py3-none-manylinux1_x86_64.whl (14.0 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 14.0/14.0 MB 85.7 MB/s eta 0:00:00
Downloading nvidia_cuda_nvcc_cu12-12.3.107-py3-none-manylinux1_x86_64.whl (22.0 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 22.0/22.0 MB 90.1 MB/s eta 0:00:00
Downloading nvidia_cuda_nvrtc_cu12-12.3.107-py3-none-manylinux1_x86_64.whl (24.9 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 24.9/24.9 MB 82.4 MB/s eta 0:00:00
Downloading nvidia_cuda_runtime_cu12-12.3.101-py3-none-manylinux1_x86_64.whl (867 kB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 867.7/867.7 kB 75.6 MB/s eta 0:00:00
Downloading nvidia_cudnn_cu12-8.9.7.29-py3-none-manylinux1_x86_64.whl (704.7 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 704.7/704.7 MB 18.5 MB/s eta 0:00:00
Downloading nvidia_cufft_cu12-11.0.12.1-py3-none-manylinux1_x86_64.whl (98.8 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 98.8/98.8 MB 59.0 MB/s eta 0:00:00
Downloading nvidia_cusolver_cu12-11.5.4.101-py3-none-manylinux1_x86_64.whl (125.2 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 125.2/125.2 MB 58.8 MB/s eta 0:00:00
Downloading nvidia_cusparse_cu12-12.2.0.103-py3-none-manylinux1_x86_64.whl (197.5 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 197.5/197.5 MB 44.0 MB/s eta 0:00:00
Using cached nvidia_nccl_cu12-2.19.3-py3-none-manylinux1_x86_64.whl (166.0 MB)
Using cached nvidia_nvjitlink_cu12-12.3.101-py3-none-manylinux1_x86_64.whl (20.5 MB)
Using cached scipy-1.12.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (38.4 MB)
Building wheels for collected packages: pytorch-test
  Building editable for pytorch-test (pyproject.toml) ... done
  Created wheel for pytorch-test: filename=pytorch_test-0.1.0-0.editable-py3-none-any.whl size=1383 sha256=9489f8b4b6f4b66f89637a58074452d7f004551ecbd83cf741b97c004d340000
  Stored in directory: /tmp/pip-ephem-wheel-cache-ctbib7o9/wheels/97/54/f5/d849319cdfa096e074df352654ee2e7c919da8951f090690c6
Successfully built pytorch-test
Installing collected packages: scipy, pytorch-test, opt-einsum, nvidia-nvjitlink-cu12, nvidia-nccl-cu12, nvidia-cusparse-cu12, nvidia-cusolver-cu12, nvidia-cufft-cu12, nvidia-cudnn-cu12, nvidia-cuda-runtime-cu12, nvidia-cuda-nvrtc-cu12, nvidia-cuda-nvcc-cu12, nvidia-cuda-cupti-cu12, nvidia-cublas-cu12, numpy, ml-dtypes, jaxlib, jax
Successfully installed jax-0.4.23 jaxlib-0.4.23+cuda12.cudnn89 ml-dtypes-0.3.2 numpy-1.26.3 nvidia-cublas-cu12-12.3.4.1 nvidia-cuda-cupti-cu12-12.3.101 nvidia-cuda-nvcc-cu12-12.3.107 nvidia-cuda-nvrtc-cu12-12.3.107 nvidia-cuda-runtime-cu12-12.3.101 nvidia-cudnn-cu12-8.9.7.29 nvidia-cufft-cu12-11.0.12.1 nvidia-cusolver-cu12-11.5.4.101 nvidia-cusparse-cu12-12.2.0.103 nvidia-nccl-cu12-2.19.3 nvidia-nvjitlink-cu12-12.3.101 opt-einsum-3.3.0 pytorch-test-0.1.0 scipy-1.12.0
Done!

sebslight avatar Feb 03 '24 04:02 sebslight

I'm also unable to reproduce (note that I already have CUDA 12 installed). Here's the end:

Building wheels for collected packages: rye-test
  Building editable for rye-test (pyproject.toml) ... done
  Created wheel for rye-test: filename=rye_test-0.1.0-py3-none-any.whl size=1313 sha256=b403fd5c2723e87860bb20ddc438d2bfbee2934549a56af41c4d2ce5010168a6
  Stored in directory: /tmp/pip-ephem-wheel-cache-z8kve1sd/wheels/8b/19/c8/73a63a20645e0f1ed9aae9dd5d459f0f7ad2332bb27cba6c0f
Successfully built rye-test
Installing collected packages: scipy, rye-test, opt-einsum, nvidia-nvjitlink-cu12, nvidia-nccl-cu12, nvidia-cusparse-cu12, nvidia-cusolver-cu12, nvidia-cufft-cu12, nvidia-cudnn-cu12, nvidia-cuda-runtime-cu12, nvidia-cuda-nvrtc-cu12, nvidia-cuda-nvcc-cu12, nvidia-cuda-cupti-cu12, nvidia-cublas-cu12, numpy, ml-dtypes, jaxlib, jax
Successfully installed jax-0.4.23 jaxlib-0.4.23+cuda12.cudnn89 ml-dtypes-0.3.2 numpy-1.26.4 nvidia-cublas-cu12-12.3.4.1 nvidia-cuda-cupti-cu12-12.3.101 nvidia-cuda-nvcc-cu12-12.3.107 nvidia-cuda-nvrtc-cu12-12.3.107 nvidia-cuda-runtime-cu12-12.3.101 nvidia-cudnn-cu12-8.9.7.29 nvidia-cufft-cu12-11.0.12.1 nvidia-cusolver-cu12-11.5.4.101 nvidia-cusparse-cu12-12.2.0.103 nvidia-nccl-cu12-2.19.3 nvidia-nvjitlink-cu12-12.3.101 opt-einsum-3.3.0 rye-test-0.1.0 scipy-1.12.0
Done!

Maybe Google fixed the link?

rachtsingh avatar Feb 07 '24 01:02 rachtsingh

I am going to close this for now, as it seems to be working at the moment. Please open a new one if it breaks again.

mitsuhiko avatar Feb 08 '24 22:02 mitsuhiko