
macOS: failed to import torchao.quantization.quantize_

ownia opened this issue 1 month ago

(.venv) ➜  tmp python3 -c "from torchao.quantization import quantize_"
Traceback (most recent call last):
  File "<string>", line 1, in <module>
    from torchao.quantization import quantize_
  File "/Users/ownia/codespace/tmp/.venv/lib/python3.14/site-packages/torchao/__init__.py", line 110, in <module>
    from torchao.quantization import (
    ...<2 lines>...
    )
  File "/Users/ownia/codespace/tmp/.venv/lib/python3.14/site-packages/torchao/quantization/__init__.py", line 6, in <module>
    from .autoquant import (
    ...<8 lines>...
    )
  File "/Users/ownia/codespace/tmp/.venv/lib/python3.14/site-packages/torchao/quantization/autoquant.py", line 11, in <module>
    from torchao.dtypes import (
    ...<6 lines>...
    )
  File "/Users/ownia/codespace/tmp/.venv/lib/python3.14/site-packages/torchao/dtypes/__init__.py", line 1, in <module>
    from . import affine_quantized_tensor_ops
  File "/Users/ownia/codespace/tmp/.venv/lib/python3.14/site-packages/torchao/dtypes/affine_quantized_tensor_ops.py", line 11, in <module>
    from torchao.dtypes.affine_quantized_tensor import (
        AffineQuantizedTensor,
    )
  File "/Users/ownia/codespace/tmp/.venv/lib/python3.14/site-packages/torchao/dtypes/affine_quantized_tensor.py", line 17, in <module>
    from torchao.quantization.quant_primitives import (
    ...<19 lines>...
    )
  File "/Users/ownia/codespace/tmp/.venv/lib/python3.14/site-packages/torchao/quantization/quant_primitives.py", line 206, in <module>
    register_custom_op = _register_custom_op(quant_lib)
  File "/Users/ownia/codespace/tmp/.venv/lib/python3.14/site-packages/torchao/utils.py", line 210, in _register_custom_op
    from torch._inductor.decomposition import register_decomposition
  File "/Users/ownia/codespace/tmp/.venv/lib/python3.14/site-packages/torch/_inductor/decomposition.py", line 14, in <module>
    import torch.ao.quantization.fx._decomposed
  File "/Users/ownia/codespace/tmp/.venv/lib/python3.14/site-packages/torch/ao/quantization/fx/__init__.py", line 3, in <module>
    from .prepare import prepare
  File "/Users/ownia/codespace/tmp/.venv/lib/python3.14/site-packages/torch/ao/quantization/fx/prepare.py", line 31, in <module>
    from torch.ao.quantization.quantizer import (
    ...<6 lines>...
    )
  File "/Users/ownia/codespace/tmp/.venv/lib/python3.14/site-packages/torch/ao/quantization/quantizer/__init__.py", line 1, in <module>
    from .quantizer import (
    ...<8 lines>...
    )
  File "/Users/ownia/codespace/tmp/.venv/lib/python3.14/site-packages/torch/ao/quantization/quantizer/quantizer.py", line 85, in <module>
    EdgeOrNode.__module__ = "torch.ao.quantization.quantizer.quantizer"
    ^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'typing.Union' object has no attribute '__module__' and no __dict__ for setting new attributes. Did you mean: '__reduce__'?

ownia (Nov 03 '25 17:11)

I think this might be related to #158184. It seems that setting __module__ on a typing.Union alias is no longer supported in Python 3.14, but that PR missed updating this spot in torch.ao.quantization.quantizer.
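For reference, this reproduces outside of torch with just typing on 3.14 (a minimal sketch; EdgeOrNode here is only a stand-in alias):

from typing import Union

# Stand-in for the EdgeOrNode alias in torch/ao/quantization/quantizer/quantizer.py
EdgeOrNode = Union[int, str]

# Accepted on Python 3.13 and earlier; on 3.14 Union aliases no longer have a
# writable __module__, so this raises the AttributeError shown in the traceback above.
EdgeOrNode.__module__ = "torch.ao.quantization.quantizer.quantizer"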

cc @jerryzh168 @andrewor14

You can try downgrading your Python version to unblock for now.
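For example (a sketch, assuming a 3.13 or 3.12 interpreter is already installed, e.g. from python.org or pyenv):

python3.13 -m venv .venv
source .venv/bin/activate
pip install torch torchao
python3 -c "from torchao.quantization import quantize_"   # should import cleanly now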

liangel-02 (Nov 03 '25 21:11)

Will close this for now since we have a workaround (downgrading Python). I think inductor also only supports older Python versions right now, probably up to 3.11 / 3.12.

Edit: I see torch.compile support for Python 3.13 was completed (https://dev-discuss.pytorch.org/t/torch-compile-support-for-python-3-13-completed/2738), but I haven't seen 3.14 support yet.
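In the meantime, a rough guard you could put at the top of scripts that need torchao (just a sketch; the 3.13 cutoff only reflects the status described above and should be dropped once 3.14 is supported):

import sys

# Fail fast with a clear message instead of the deep AttributeError traceback.
if sys.version_info[:2] > (3, 13):
    raise RuntimeError(
        f"torchao currently fails to import on Python "
        f"{sys.version_info.major}.{sys.version_info.minor}; use 3.13 or earlier"
    )

from torchao.quantization import quantize_  # only imported after the version check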

jerryzh168 (Dec 05 '25 18:12)

Hi @jerryzh168, could you point out which workaround you mean?

ownia (Dec 08 '25 02:12)