
torchao release compatibility table

Open vkuzo opened this issue 3 months ago • 31 comments

This issue describes the compatibility matrix between torchao releases and its dependencies. If you are seeing an error when importing torchao that looks like this,

```
(pytorch_nightly) [[email protected] ~/local]$ python -c "import torchao"
Fatal Python error: Aborted
```

then most likely you can resolve this error by ensuring that the torch version in your environment is compatible with the torch version used to build your torchao version.

torch

| torchao version | torch version | torch version, torchao's Python API only |
|---|---|---|
| 0.15.0dev (nightly) | 2.10.0dev (nightly) | 2.10.0, 2.9.0, 2.8.0 |
| 0.14.1 | 2.9.0 | 2.9.0, 2.8.0, 2.7.1 |
| 0.13.0 | 2.8.0 | 2.8.0, 2.7.1, 2.6.0 |
| 0.12.0 | 2.7.1, 2.6.0, 2.5.0 | n/a |
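For scripting, the stable rows of the table above can be encoded as a simple lookup; a minimal sketch with the rows hardcoded from the table (the names here are ours, not an official torchao API):

```python
# Compatibility rows copied from the table above (stable releases only).
# Maps a torchao release to the torch versions its Python API supports.
PYTHON_API_COMPAT = {
    "0.14.1": ("2.9.0", "2.8.0", "2.7.1"),
    "0.13.0": ("2.8.0", "2.7.1", "2.6.0"),
}

def python_api_compatible(torchao_version: str, torch_version: str) -> bool:
    """Return True if this torch version is listed as supported
    by the given torchao release's Python API."""
    # Strip local build suffixes like "+cu128" before comparing.
    base = torch_version.split("+")[0]
    return base in PYTHON_API_COMPAT.get(torchao_version, ())

print(python_api_compatible("0.14.1", "2.9.0+cu128"))  # → True
print(python_api_compatible("0.13.0", "2.9.0"))        # → False
```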

fbgemm_gpu

torchao has an optional runtime dependency on fbgemm_gpu. Please see https://docs.pytorch.org/FBGEMM/general/Releases.html for the compatibility matrix for fbgemm_gpu. Note that while torchao's Python API supports multiple torch versions, each fbgemm_gpu version only supports a single torch version. Therefore, if you are using torchao together with fbgemm_gpu, you should use the torch version corresponding to your fbgemm_gpu version.

vkuzo avatar Sep 02 '25 11:09 vkuzo

any ideas when we can fix the compatibility of torchao nightly with torch? currently it's blocking tests in vllm: https://github.com/vllm-project/vllm/pull/21982/files#diff-2fe466060a88bb6a57175df8ca7175849db82a2cf2ba082295d481ab57e58868R512

jerryzh168 avatar Sep 10 '25 23:09 jerryzh168

I'm also having the same issue here. Blocking testing executorch with torch nightly.

I have torch on latest master (with some python-only local changes) '2.9.0a0+git3564a8a', and torchao torchao-0.14.0.dev20250909+cu126.

yushangdi avatar Sep 11 '25 01:09 yushangdi

> I'm also having the same issue here. Blocking testing executorch with torch nightly.
>
> I have torch on latest master (with some python-only local changes) '2.9.0a0+git3564a8a', and torchao torchao-0.14.0.dev20250909+cu126.

I resolved it by running `pip install fbgemm-gpu-nightly` to override fbgemm-gpu. So far, importing torchao no longer errors. Not sure if there are any other issues once I actually run anything.

yushangdi avatar Sep 11 '25 01:09 yushangdi

executorch doesn't need fbgemm-gpu, I think? If you use torchao nightly, can you also use torch nightly in ET?

jerryzh168 avatar Sep 11 '25 02:09 jerryzh168

we also need to update the following check https://github.com/pytorch/ao/blob/ea8c00fc90c99f0bf19fe87d22eb186c3dd19bf6/torchao/__init__.py#L38C45-L38C76 for PyTorch 2.10.x, since `str(torch.__version__) >= "2.9"` is a lexicographic string comparison and will not work properly for PyTorch 2.10

vkuzo avatar Sep 15 '25 19:09 vkuzo
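The pitfall mentioned above is lexicographic ordering: comparing version strings character by character ranks "2.10" below "2.9". A small stdlib-only sketch of the failure and one way to fix it (the helper name is ours, not torchao's; real code would typically use `packaging.version.parse`):

```python
# Lexicographic comparison: '1' < '9', so "2.10..." sorts before "2.9".
print("2.10.0" >= "2.9")  # → False, even though 2.10 is newer

def version_tuple(v: str) -> tuple:
    """Parse a dotted version like '2.10.0' into (2, 10, 0) for numeric
    comparison, dropping local suffixes like '+cu128' and non-numeric parts."""
    return tuple(int(p) for p in v.split("+")[0].split(".") if p.isdigit())

print(version_tuple("2.10.0") >= version_tuple("2.9"))  # → True
```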

@liangel-02 , https://github.com/facebookresearch/FBGEMM/pull/1900/files might be relevant here - this is fbgemm fixing the same issue in their repo

vkuzo avatar Sep 19 '25 10:09 vkuzo

Getting the error on all versions:

```
Skipping import of cpp extensions due to incompatible torch version 2.9.0+cu128 for torchao version 0.14.0
Skipping import of cpp extensions due to incompatible torch version 2.8.0+cu129 for torchao version 0.14.0
Skipping import of cpp extensions due to incompatible torch version 2.8.0+cu128 for torchao version 0.14.0
```

steveepreston avatar Oct 15 '25 09:10 steveepreston

This makes it very hard to use torchao with the latest versions of Ray Serve/vLLM:

```
Skipping import of cpp extensions due to incompatible torch version 2.8.0+cu128 for torchao version 0.14.0
```

NeonSludge avatar Oct 15 '25 11:10 NeonSludge

Hi folks, thank you for reporting, we are looking into this and will provide an update soon. Note that unless you actually need the C++ or CUDA kernels that ship with torchao, you can ignore the warning and use the Python-only APIs without issues.

vkuzo avatar Oct 15 '25 13:10 vkuzo
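If the warning is just noise for you (i.e. you only use the Python APIs), one way to mute it, assuming torchao emits it through the stdlib `logging` module under a logger named "torchao" as the `WARNING torchao:__init__.py` log lines later in this thread suggest:

```python
import logging

# Raise the threshold for torchao's logger before importing it, so the
# "Skipping import of cpp extensions" warning is filtered out.
logging.getLogger("torchao").setLevel(logging.ERROR)

# import torchao  # now imports without printing the warning
```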

PyTorch release 2.9.0 eta date is today : https://dev-discuss.pytorch.org/t/pytorch-2-9-final-rc-available/3245

atalman avatar Oct 15 '25 14:10 atalman

still have this warning message for torch 2.9.0:

```
Skipping import of cpp extensions due to incompatible torch version 2.9.0+cu128 for torchao version 0.14.0
Please see GitHub issue #2919 for more info
```

DefTruth avatar Oct 16 '25 08:10 DefTruth

These frustrating warnings keep appearing in every environment I work in. Please fix or mute them.

steveepreston avatar Oct 16 '25 09:10 steveepreston

Hi all, we plan to release a 0.14.1 as soon as possible. The new version will be built against 2.9.0 and it will load the cpp extensions when used against this version. You won't see this warning anymore.

andrewor14 avatar Oct 16 '25 13:10 andrewor14

Are there any plans to support CUDA 13.0? Thank you.

lisi31415926 avatar Oct 17 '25 19:10 lisi31415926

@lisi31415926 Hold on, let's get out of the current broken state first.

steveepreston avatar Oct 17 '25 20:10 steveepreston

Update: ETA for 0.14.1 release is 10/20 (Mon). Thank you everyone for your patience.

> Are there any plans to support CUDA 13.0? Thank you.

Yes, this should be supported

andrewor14 avatar Oct 17 '25 21:10 andrewor14

I don't get the warning with the torchao 0.13.0 version.

Does the warning from 0.14.0 actually hurt performance? If it's just a harmless warning, I can ignore it.

JamesSand avatar Oct 18 '25 05:10 JamesSand

> Does the warning from 0.14.0 actually hurt performance? If it's just a harmless warning, I can ignore it.

Most users can just ignore it, since they're only using the Python APIs. It doesn't actually hurt performance.

andrewor14 avatar Oct 20 '25 14:10 andrewor14

@vkuzo @andrewor14 @liangel-02 In the compatibility table, can we note that Python 3.10 is the minimum supported version, since there won't be any torch 2.9 builds for Python <= 3.9?

jainapurva avatar Oct 20 '25 18:10 jainapurva

> Update: ETA for 0.14.1 release is 10/20 (Mon). Thank you everyone for your patience.
>
> > Are there any plans to support CUDA 13.0? Thank you.
>
> Yes, this should be supported

Today is 10/23 :smile:

Freed-Wu avatar Oct 23 '25 13:10 Freed-Wu

@andrewor14 Any update on 0.14.1? It is painful to see these logs.

```
WARNING  torchao:__init__.py:81 Skipping import of cpp extensions due to incompatible torch version 2.9.0+cu130 for torchao version 0.14.0
Please see GitHub issue #2919 for more info
```

Qubitium avatar Oct 23 '25 14:10 Qubitium

Hi everyone, there have been some delays due to some compatibility problems between the fbgemm_gpu wheels and torch 2.9.0. They are working on fixing the problems, hopefully we will be able to release today.

andrewor14 avatar Oct 23 '25 14:10 andrewor14

We just released torchao 0.14.1, thank you everyone for your patience. This version is compatible with torch 2.9.0. Please install these as follows:

```shell
# default cuda 12.8
pip install torch
pip install torchao

# or specify custom cuda version, one of [126, 128, 129, 130]
pip install torch --index-url https://download.pytorch.org/whl/cu129
pip install torchao --index-url https://download.pytorch.org/whl/cu129
```

If you're using PTQ through configs like Int4WeightOnlyConfig or Float8DynamicActivationFloat8WeightConfig, please additionally upgrade your fbgemm_gpu_genai to 1.4.1, which is compatible with torch 2.9.0:

```shell
# specify custom cuda version, one of [126, 128, 129, 130]
pip install fbgemm_gpu_genai --index-url https://download.pytorch.org/whl/cu129
```

andrewor14 avatar Oct 24 '25 04:10 andrewor14
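The CUDA tag in those commands maps mechanically to an index URL; a trivial helper sketch (our naming, not part of any official tooling), using the tags listed in the comment above:

```python
# Supported CUDA tags taken from the release comment above.
SUPPORTED_CUDA_TAGS = {"126", "128", "129", "130"}

def wheel_index_url(cuda_tag: str) -> str:
    """Build the PyTorch wheel index URL for a given CUDA tag."""
    if cuda_tag not in SUPPORTED_CUDA_TAGS:
        raise ValueError(f"unsupported CUDA tag: {cuda_tag}")
    return f"https://download.pytorch.org/whl/cu{cuda_tag}"

print(wheel_index_url("130"))  # → https://download.pytorch.org/whl/cu130
```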

@andrewor14 Can you check whether the cu130 channel is populated, or whether the release builders are still chugging away and haven't pushed the wheels yet?

```
(vm313t) root@gpu-base:~/gptqmodel# pip show torch
Name: torch
Version: 2.9.0+cu130

(vm313t) root@gpu-base:~/gptqmodel# pip install torchao --index-url https://download.pytorch.org/whl/cu130 -U
Looking in indexes: https://download.pytorch.org/whl/cu130
Requirement already satisfied: torchao in /root/vm313t/lib/python3.13t/site-packages (0.14.0)

(vm313t) root@gpu-base:~/gptqmodel# pip install torchao==0.14.1 --index-url https://download.pytorch.org/whl/cu130 -U
Looking in indexes: https://download.pytorch.org/whl/cu130
ERROR: Could not find a version that satisfies the requirement torchao==0.14.1 (from versions: none)
ERROR: No matching distribution found for torchao==0.14.1
```

Qubitium avatar Oct 24 '25 06:10 Qubitium

@Qubitium This seems to work for me, try again?

```
$ pip install torchao==0.14.1 --index-url https://download.pytorch.org/whl/cu130 -U
Looking in indexes: https://download.pytorch.org/whl/cu130
Collecting torchao==0.14.1
  Using cached https://download.pytorch.org/whl/cu130/torchao-0.14.1%2Bcu130-cp310-abi3-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl.metadata (19 kB)
Using cached https://download.pytorch.org/whl/cu130/torchao-0.14.1%2Bcu130-cp310-abi3-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl (7.3 MB)
Installing collected packages: torchao
Successfully installed torchao-0.14.1+cu130
```

andrewor14 avatar Oct 24 '25 15:10 andrewor14

@andrewor14 I think the pytorch team is missing Python 3.13 and 3.14 wheels? This is strange. Just tried and failed.

On Ubuntu 24.04 x86_64

```
(vm313t) root@gpu-base:~/gptqmodel# pip install torchao==0.14.1 --index-url https://download.pytorch.org/whl/cu130 -U
Looking in indexes: https://download.pytorch.org/whl/cu130
ERROR: Could not find a version that satisfies the requirement torchao==0.14.1 (from versions: none)
ERROR: No matching distribution found for torchao==0.14.1
(vm313t) root@gpu-base:~/gptqmodel# python --version
Python 3.13.8
(vm313t) root@gpu-base:~/gptqmodel# pip show torch
Name: torch
Version: 2.9.0+cu130
Summary: Tensors and Dynamic neural networks in Python with strong GPU acceleration
Home-page: https://pytorch.org
Author: 
Author-email: PyTorch Team <[email protected]>
License: BSD-3-Clause
Location: /root/vm313t/lib/python3.13t/site-packages
Requires: filelock, fsspec, jinja2, networkx, nvidia-cublas, nvidia-cuda-cupti, nvidia-cuda-nvrtc, nvidia-cuda-runtime, nvidia-cudnn-cu13, nvidia-cufft, nvidia-cufile, nvidia-curand, nvidia-cusolver, nvidia-cusparse, nvidia-cusparselt-cu13, nvidia-nccl-cu13, nvidia-nvjitlink, nvidia-nvshmem-cu13, nvidia-nvtx, setuptools, sympy, triton, typing-extensions
Required-by: accelerate, bitblas, causal_conv1d, flash_attn, GPTQModel, lm_eval, MemLord, peft, torchvision
```

Maybe it's because my Python 3.13.8 is the nogil (free-threaded) build? I have no clue why it's not working for me, or what magic version/ABI combo pip is matching.

Qubitium avatar Oct 24 '25 15:10 Qubitium

@Qubitium I can repro this. I don't think we are building wheels for the free-threaded build currently. Would you mind creating an issue for adding free-threaded support to torchao?

Also, running `pip install torchao` without specifying the index-url works for me. I believe this installs just the python components of torchao without the CUDA kernels. Does that work for you, or do you need the CUDA builds?

jcaip avatar Oct 24 '25 16:10 jcaip
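You can check from Python whether your interpreter is the free-threaded build (the case missing wheels here); a sketch using `sysconfig`, which exposes the `Py_GIL_DISABLED` build flag on CPython 3.13+:

```python
import sysconfig

def is_free_threaded_build() -> bool:
    """True on a free-threaded ("nogil") CPython build, e.g. 3.13t.

    Py_GIL_DISABLED is 1 in free-threaded builds and 0 (or absent
    on older Pythons) otherwise.
    """
    return bool(sysconfig.get_config_var("Py_GIL_DISABLED"))

print(is_free_threaded_build())
```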

@jcaip Issue created. https://github.com/pytorch/ao/issues/3243

And yes, `pip install torchao -U` worked and installed torchao 0.14.1. As for the CUDA builds, I have no idea. The latest transformers model kernels auto-import TorchAO, so I had to install it. Frankly, not sure if it needs the CUDA part.

Qubitium avatar Oct 24 '25 17:10 Qubitium

Will this get updated for pytorch 2.9.1, which was released 2 weeks ago? Seeing:

```
Skipping import of cpp extensions due to incompatible torch version 2.9.1+cu128 for torchao version 0.14.1+cu128
Please see https://github.com/pytorch/ao/issues/2919 for more info
```

tonyf avatar Nov 25 '25 17:11 tonyf

@tonyf thank you for the callout, we will discuss and report back here

vkuzo avatar Nov 26 '25 14:11 vkuzo