Add int4 choose qparams algorithm test for AWQ
Summary:
Add a test for the quantization parameter selection algorithm option (int4_choose_qparams_algorithm) of int4 weight-only quantization
Related Issue/PR: https://github.com/pytorch/ao/pull/3106#issuecomment-3379278495
Test plan: test/prototype/test_awq.py
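For reference, a minimal sketch of the added case (hypothetical layout; the actual test parametrizes over base configs, so the exact structure in test/prototype/test_awq.py may differ):

# Sketch only: add an HQQ qparams-selection variant to the CUDA base configs
# used by the AWQ tests (names below are illustrative, not the exact test code).
from torchao.quantization import Int4WeightOnlyConfig

base_configs_cuda = [
    Int4WeightOnlyConfig(group_size=128),
    Int4WeightOnlyConfig(group_size=128, int4_packing_format="tile_packed_to_4d"),
    # new case exercised by this PR: HQQ algorithm for choosing qparams
    Int4WeightOnlyConfig(
        group_size=128,
        int4_packing_format="tile_packed_to_4d",
        int4_choose_qparams_algorithm="hqq",
    ),
]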
Wait, so no fix is needed? Does the test with the int4 HQQ config work? I remember it failing for me.
Not sure of the true reason, but it failed: the weight was overridden to Int4Tensor instead of Int4TilePackedTo4dTensor, which generated a dispatch error.
And Int4WeightOnlyConfig(int4_packing_format="tile_packed_to_4d", int4_choose_qparams_algorithm=Int4ChooseQParamsAlgorithm.HQQ) did work.
You mean test_awq_functionality failed, right?
No, it worked. The test_awq_functionality case can be summarized as:
- Old: Fail (overridden to Int4Tensor, not Int4TilePackedTo4dTensor)
- Int4WeightOnlyConfig(int4_packing_format="tile_packed_to_4d", int4_choose_qparams_algorithm=Int4ChooseQParamsAlgorithm.HQQ): No fail; Int4TilePackedTo4dTensor is loaded
- New (in this PR): No fail
The reason why Int4Tensor generates a dispatch error seems to be that they use different kernels:
- Int4Tensor: the row-wise FBGEMM kernel torch.ops.fbgemm.bf16i4bf16_rowwise is called (https://github.com/pytorch/ao/blob/bb65dbc2649077729e8afb39a73ddef0d2adcb8f/torchao/quantization/quantize_/workflows/int4/int4_tensor.py#L187)
- Int4TilePackedTo4dTensor: the TinyGEMM kernel (https://github.com/pytorch/ao/blob/bb65dbc2649077729e8afb39a73ddef0d2adcb8f/torchao/quantization/quantize_/workflows/int4/int4_tile_packed_to_4d_tensor.py#L54)
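For concreteness, a minimal standalone sketch (hypothetical repro, not code from the test) of quantizing a linear with the tile_packed_to_4d + HQQ config and running a forward pass, which should dispatch to the TinyGEMM path instead of the FBGEMM row-wise kernel:

import torch
from torchao.quantization import Int4WeightOnlyConfig, quantize_

# hypothetical repro; shapes chosen so group_size divides in_features
linear = torch.nn.Linear(256, 512, bias=False, dtype=torch.bfloat16, device="cuda")
config = Int4WeightOnlyConfig(
    group_size=128,
    int4_packing_format="tile_packed_to_4d",
    int4_choose_qparams_algorithm="hqq",
)
quantize_(linear, config)  # weight should become Int4TilePackedTo4dTensor
x = torch.randn(8, 256, dtype=torch.bfloat16, device="cuda")
out = linear(x)  # expected to hit the TinyGEMM int4 mm path
print(type(linear.weight).__name__, out.shape)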
for the script:
import torch
from torchao.quantization import Int4WeightOnlyConfig
from torchao.utils import _is_fbgemm_gpu_genai_available, torch_version_at_least

devices = ["cpu", "cuda"]
device_to_base_configs = {
    "cuda": [
        Int4WeightOnlyConfig(group_size=128),
        # Note: the functionality unit test doesn't work for hqq
        Int4WeightOnlyConfig(group_size=128, int4_packing_format="tile_packed_to_4d"),
        Int4WeightOnlyConfig(
            group_size=128,
            int4_packing_format="tile_packed_to_4d",
            int4_choose_qparams_algorithm="hqq",
        ),
    ],
}

for i, cfg in enumerate(device_to_base_configs["cuda"]):
    print(f"Config {i}:")
    print(f" packing_format: {cfg.int4_packing_format}")
    print(f" choose_qparams: {cfg.int4_choose_qparams_algorithm}")
I get:
Config 0:
packing_format: Int4PackingFormat.PLAIN
choose_qparams: Int4ChooseQParamsAlgorithm.TINYGEMM
Config 1:
packing_format: tile_packed_to_4d
choose_qparams: Int4ChooseQParamsAlgorithm.TINYGEMM
Config 2:
packing_format: tile_packed_to_4d
choose_qparams: hqq
And it seems to run locally; test_awq_functionality works for the hqq algorithm as well.
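(For a quick local check, something along these lines should run just that test, assuming pytest name filtering:)

pytest test/prototype/test_awq.py -k test_awq_functionality -v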
Yeah, it returns no failures for test_awq.py locally. We don't have to debug them and can just add that test case, right?
Hmm... I definitely remember getting the following error:
torchao/prototype/awq/core.py:91: in calculate_qparams
q_out = F.linear(acc / scales, w, self.bias)
↓
torchao/quantization/quantize_/workflows/int4/int4_tensor.py:161
res = torch.ops.fbgemm.bf16i4bf16_rowwise(...)
↓
RuntimeError: cutlass cannot initialize
But everything is resolved without any change...? Mamma mia :confused:
Yeah, it runs for me locally without any changes. Not sure why you get the error; it seems like an environment issue. Are you using an H100 machine? fbgemm is only available on H100.
Yes, I only use an H100 during allocated time slots (luckily I have one now) and mostly use an A100. Updated the title and context for this change.
CI failure (test/quantization/pt2e/test_quantize_pt2e_qat.py) looks unrelated.
@pytorchbot label "topic: not user facing"
Can you rebase? I've seen these CI errors before, but I believe they should be fixed now.
Sorry, I used the wrong git command; fixing it.
@jerryzh168 fixed the rebase, please take a look.