
Intel macOS: torch 2.6 no longer installable

Open · bumpyetching0a opened this issue 9 months ago · 1 comment

My machine

iMac 2019 21.5″, Intel Core i7-8700, 64 GB RAM, 1 TB NVMe SSD, RX Vega 56.

The problem

```
× No solution found when resolving dependencies:
╰─▶ Because torch==2.6.0 has no wheels with a matching platform tag (e.g.,
    macosx_10_16_x86_64) and you require torch==2.6.0, we can conclude
    that your requirements are unsatisfiable.

    hint: Wheels are available for torch (v2.6.0) on the following
    platforms: manylinux_2_28_aarch64, manylinux1_x86_64,
    macosx_11_0_arm64, win_amd64
```
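For background (not from the thread itself): pip and uv only consider wheels whose platform tag matches one the local interpreter reports, and on an Intel Mac the accepted tags look like `macosx_*_x86_64`, which torch 2.6.0 no longer ships. A quick way to see which tags your own machine accepts, assuming the third-party `packaging` library is installed (it ships alongside pip):

```shell
# print the platform tags this interpreter will accept when resolving wheels
# (requires the third-party 'packaging' package)
python3 -c 'from packaging.tags import sys_tags; print(sorted({t.platform for t in sys_tags()}))'
```

If no `macosx_*_arm64` tag appears in that list, the arm64-only torch 2.6.0 wheels can never match, which is exactly the resolver error above.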

Fix

This is fixable, but it is a pain. To be honest, I am not sure exactly how I did it, as I was working with Gemini, but in short: requirements-no-gpu-uv.txt (found in transformerlab/src) was modified, among other things, to use torch 2.2.2, and the tokenizers pin was deleted.
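For anyone who wants to reproduce the workaround by hand, it boils down to two edits before re-resolving. The sketch below is illustrative, not the author's exact steps: the demo file and the tokenizers version are made up, and only the torch 2.2.2 pin comes from the thread.

```shell
# demo input standing in for the real requirements file (versions illustrative)
printf 'torch==2.6.0\ntokenizers==0.20.0\n' > /tmp/requirements-demo.txt

# pin torch back to 2.2.2 (the last release with Intel macOS x86_64 wheels)
# and drop the tokenizers pin so the resolver can pick a compatible build
sed -e 's/^torch==2\.6\.0$/torch==2.2.2/' -e '/^tokenizers==/d' /tmp/requirements-demo.txt
# prints: torch==2.2.2
```

After making the equivalent edits to the real input file, the lock can be regenerated with the same command the attached file's header documents: `uv pip compile requirements.in -o requirements-no-gpu-uv.txt`.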

Request of developers

Hoping app developers can automate this going forward!

Attached as a courtesy:

```
# This file was autogenerated by uv via the following command:
#    uv pip compile requirements.in -o requirements-no-gpu-uv.txt
absl-py==2.1.0  # via rouge-score, tensorboard
accelerate==1.3.0  # via -r requirements.in, peft
aiofiles==24.1.0  # via -r requirements.in
aiohappyeyeballs==2.4.4  # via aiohttp
aiohttp==3.11.11  # via datasets, fschat, fsspec
aiosignal==1.3.2  # via aiohttp
aiosqlite==0.20.0  # via -r requirements.in
annotated-types==0.7.0  # via pydantic
anyio==4.8.0  # via httpx, starlette, watchfiles
attrs==25.1.0  # via aiohttp
brotli==1.1.0  # via py7zr
certifi==2024.7.4  # via httpcore, httpx, requests, sentry-sdk
charset-normalizer==2.1.1  # via requests
click==8.1.8  # via nltk, uvicorn, wandb
datasets==2.19.1  # via -r requirements.in, evaluate
dill==0.3.8  # via datasets, evaluate, multiprocess
docker-pycreds==0.4.0  # via wandb
einops==0.8.0  # via -r requirements.in
evaluate==0.4.3  # via -r requirements.in
fastapi==0.115.7  # via -r requirements.in, fschat
filelock==3.13.1  # via datasets, huggingface-hub, torch, transformers
frozenlist==1.5.0  # via aiohttp, aiosignal
fschat==0.2.36  # via -r requirements.in
fsspec==2024.2.0  # via datasets, evaluate, huggingface-hub, torch
gitdb==4.0.12  # via gitpython
gitpython==3.1.44  # via wandb
grpcio==1.70.0  # via tensorboard
h11==0.14.0  # via httpcore, uvicorn
httpcore==1.0.7  # via httpx
httpx==0.28.1  # via fschat
huggingface-hub==0.28.0  # via accelerate, datasets, evaluate, peft, tokenizers, transformers
idna==3.7  # via anyio, httpx, requests, yarl
inflate64==1.0.1  # via py7zr
jinja2==3.1.5  # via torch
joblib==1.4.2  # via nltk
latex2mathml==3.77.0  # via markdown2
loralib==0.1.2  # via -r requirements.in
markdown==3.7  # via tensorboard
markdown-it-py==3.0.0  # via rich
markdown2==2.5.3  # via fschat
markupsafe==2.1.5  # via jinja2, werkzeug
mdurl==0.1.2  # via markdown-it-py
mpmath==1.3.0  # via sympy
multidict==6.1.0  # via aiohttp, yarl
multiprocess==0.70.16  # via datasets, evaluate
multivolumefile==0.2.3  # via py7zr
networkx==3.3  # via torch
nh3==0.2.20  # via fschat
nltk==3.9.1  # via rouge-score
numpy>=1.26.0,<=1.26.4  # via accelerate, datasets, evaluate, fschat, pandas, peft, rouge-score, scipy, tensorboard, torchvision, transformers
nvidia-ml-py3==7.352.0  # via -r requirements.in
packaging==24.1  # via -r requirements.in, accelerate, datasets, evaluate, huggingface-hub, peft, tensorboard, transformers
pandas>=2.0.0,<=2.1.4  # via datasets, evaluate
peft==0.14.0  # via -r requirements.in
pillow==11.0.0  # via torchvision
platformdirs==4.3.6  # via wandb
prompt-toolkit==3.0.50  # via fschat
propcache==0.2.1  # via aiohttp, yarl
protobuf==5.29.3  # via tensorboard, wandb
psutil==6.1.1  # via -r requirements.in, accelerate, peft, py7zr, wandb
py7zr==0.22.0  # via -r requirements.in
pyarrow==19.0.0  # via datasets
pyarrow-hotfix==0.6  # via datasets
pybcj==1.0.3  # via py7zr
pycryptodomex==3.21.0  # via py7zr
pydantic==2.10.6  # via -r requirements.in, fastapi, fschat, sqlmodel, wandb
pydantic-core==2.27.2  # via pydantic
pygments==2.19.1  # via markdown2, rich
pyppmd==1.1.1  # via py7zr
python-dateutil==2.9.0.post0  # via pandas
python-multipart==0.0.20  # via -r requirements.in
pytz==2024.2  # via pandas
pyyaml==6.0.2  # via accelerate, datasets, huggingface-hub, peft, transformers, wandb, wavedrom
pyzstd==0.16.2  # via py7zr
regex==2024.11.6  # via nltk, tiktoken, transformers
requests==2.32.2  # via datasets, evaluate, fschat, huggingface-hub, tiktoken, transformers, wandb
rich==13.9.4  # via fschat
rouge-score==0.1.2  # via -r requirements.in
safetensors==0.5.2  # via accelerate, peft, transformers
scipy>=1.11.0,<=1.12.0  # via -r requirements.in
sentencepiece==0.2.0  # via -r requirements.in
sentry-sdk==2.22.0  # via wandb
setproctitle==1.3.5  # via wandb
setuptools==70.2.0  # via tensorboard, wandb
shortuuid==1.0.13  # via fschat
six==1.17.0  # via docker-pycreds, python-dateutil, rouge-score, tensorboard, wavedrom
smmap==5.0.2  # via gitdb
sniffio==1.3.1  # via anyio
sqlalchemy==2.0.38  # via sqlmodel
sqlmodel==0.0.23  # via -r requirements.in
starlette==0.45.3  # via fastapi
svgwrite==1.4.3  # via wavedrom
sympy==1.13.1  # via torch
tensorboard==2.18.0  # via -r requirements.in
tensorboard-data-server==0.7.2  # via tensorboard
texttable==1.7.0  # via py7zr
tiktoken==0.8.0  # via -r requirements.in, fschat
torch==2.2.2  # via -r requirements.in, accelerate, peft, torchaudio, torchvision
torchaudio==2.2.2  # via -r requirements.in
torchvision==0.17.2  # via -r requirements.in
tqdm==4.66.5  # via datasets, evaluate, huggingface-hub, nltk, peft, transformers
transformers==4.48.0  # via -r requirements.in, peft
typing-extensions==4.12.2  # via aiosqlite, anyio, fastapi, huggingface-hub, pydantic, pydantic-core, sqlalchemy, torch, wandb
tzdata==2025.1  # via pandas
urllib3==1.26.19  # via requests, sentry-sdk
uvicorn==0.34.0  # via fschat
wandb==0.19.7  # via -r requirements.in
watchfiles==1.0.4  # via -r requirements.in
wavedrom==2.0.3.post3  # via markdown2
wcwidth==0.2.13  # via prompt-toolkit
werkzeug==3.1.3  # via -r requirements.in, tensorboard
xxhash==3.5.0  # via datasets, evaluate
yarl==1.18.3  # via aiohttp
```

bumpyetching0a · Mar 13 '25

Hey @bumpyetching0a, thanks so much for providing the file. Would it be possible for you to open a PR and submit this file so we can test things out?

deep1401 · Apr 15 '25

Hi @bumpyetching0a, we have moved our torch version to 2.7, which does not support Intel Macs, as shown in the PyTorch announcement. To maintain system uniformity, we have also decided to deprecate support for Intel Macs, since the torch binaries for that hardware are no longer developed.

deep1401 · May 13 '25

Disappointing, but probably the right move, sadly. My fix had worked but has since broken again, and I was unable to repeat it, hence no reply earlier. Cheers at least for trying!

bumpyetching0a · May 14 '25