Implement aten.add for IntxUnpackedToInt8Tensor
:link: Helpful Links
:test_tube: See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/3304
- :page_facing_up: Preview Python docs built from this PR
Note: Links to docs will display an error until the docs builds have been completed.
:x: 10 New Failures
As of commit 6cf9b150b1c37a147e1f182746c1d7c08e18900b with merge base 6259e9885f0f0a9124dfbb7cd23bdcbf30bb1984:
NEW FAILURES - The following jobs have failed:
- PR Label Check / Check PR Labels (gh)
  Process completed with exit code 1.
- Run 1xL4 Tests / test (SM-89, linux.g6.4xlarge.experimental.nvidia.gpu, --pre torch --index-url https://download.p... / linux-job (gh)
  test/float8/test_base.py::TestFloat8Linear::test_linear_from_recipe[linear_dtype2-False-x_shape2-Float8LinearRecipeName.ROWWISE_WITH_GW_HP]
- Run Regression Tests / test (CPU 2.6, linux.4xlarge, torch==2.6.0 --index-url https://download.pytorch.org/whl/cpu, cpu) / linux-job (gh)
  test/test_low_bit_optim.py::TestOptim::test_param_groups_optim_name_AdamFp8_device_cpu
- Run Regression Tests / test (CPU 2.7, linux.4xlarge, torch==2.7.0 --index-url https://download.pytorch.org/whl/cpu, cpu) / linux-job (gh)
  test/test_low_bit_optim.py::TestOptim::test_param_groups_optim_name_AdamFp8_device_cpu
- Run Regression Tests / test (CPU 2.8, linux.4xlarge, torch==2.8.0 --index-url https://download.pytorch.org/whl/cpu, cpu) / linux-job (gh)
  test/test_low_bit_optim.py::TestOptim::test_param_groups_optim_name_AdamFp8_device_cpu
- Run Regression Tests / test (CUDA 2.6, linux.g5.12xlarge.nvidia.gpu, torch==2.6.0, cuda, 12.6) / linux-job (gh)
  test/float8/test_base.py::TestScaledMM::test_pad_inner_dim[False-base_dtype2]
- Run Regression Tests / test (CUDA 2.7, linux.g5.12xlarge.nvidia.gpu, torch==2.7.0, cuda, 12.6) / linux-job (gh)
  test/float8/test_base.py::TestScaledMM::test_pad_inner_dim[False-base_dtype2]
- Run Regression Tests / test (CUDA 2.8, linux.g5.12xlarge.nvidia.gpu, torch==2.8.0, cuda, 12.6) / linux-job (gh)
  test/float8/test_base.py::TestScaledMM::test_pad_inner_dim[False-base_dtype2]
- Run Regression Tests / test-nightly (CPU Nightly, linux.4xlarge, --pre torch --index-url https://download.pytorch.org/wh... / linux-job (gh)
  test/test_low_bit_optim.py::TestOptim::test_param_groups_optim_name_AdamFp8_device_cpu
- Run Regression Tests / test-nightly (CUDA Nightly, linux.g5.12xlarge.nvidia.gpu, --pre torch --index-url https://downloa... / linux-job (gh)
  test/float8/test_base.py::TestScaledMM::test_pad_inner_dim[False-base_dtype2]
This comment was automatically generated by Dr. CI and updates every 15 minutes.
I think we should not add an implementation for add unless there is a use case. It is not required for quantizing embedding/linear layers, which are the main uses of quantize_.
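For context, here is a minimal sketch of the embedding/linear quantization flow that quantize_ is meant to cover; it only exercises linear-style dispatch on the quantized weight and never routes through aten.add. The config and granularity names (IntxWeightOnlyConfig, PerGroup) are assumptions based on recent torchao releases and may differ across versions; this is illustrative, not the code in this PR.

```python
# Illustrative sketch only: weight-only intx quantization of an nn.Linear,
# the main quantize_ use case referenced above. Config/class names are
# assumptions and may not match every torchao version.
import torch
from torchao.quantization import quantize_, IntxWeightOnlyConfig
from torchao.quantization.granularity import PerGroup

model = torch.nn.Sequential(torch.nn.Linear(128, 64))

# Quantize linear weights to 4-bit integers with group-wise scales.
quantize_(model, IntxWeightOnlyConfig(weight_dtype=torch.int4, granularity=PerGroup(32)))

# The forward pass only needs the linear op on the quantized weight;
# no elementwise add is ever dispatched to the weight subclass.
with torch.no_grad():
    out = model(torch.randn(2, 128))
print(out.shape)
```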