
Add int4 choose_qparams algorithm test for AWQ

Open namgyu-youn opened this issue 2 months ago • 16 comments

Summary: Add a test for the quantization-parameter selection algorithm (int4_choose_qparams_algorithm) for int4 weight-only quantization

Related Issue/PR: https://github.com/pytorch/ao/pull/3106#issuecomment-3379278495

Test plan: test/prototype/test_awq.py
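
For reference, a minimal sketch of what the added case could look like. This is a hedged sketch: the test name and model are illustrative, not the actual contents of test/prototype/test_awq.py, and it assumes a CUDA machine where the tile_packed_to_4d path is available.

import torch
from torchao.quantization import Int4WeightOnlyConfig, quantize_

def test_awq_int4_choose_qparams_algorithm():
    # The config under test: HQQ qparams selection on top of the 4d tile
    # packing format that supports it (see the discussion below).
    config = Int4WeightOnlyConfig(
        group_size=128,
        int4_packing_format="tile_packed_to_4d",
        int4_choose_qparams_algorithm="hqq",
    )
    model = torch.nn.Sequential(torch.nn.Linear(256, 256)).to("cuda", torch.bfloat16)
    quantize_(model, config)
    # Smoke check: the quantized forward should dispatch without error.
    model(torch.randn(1, 256, device="cuda", dtype=torch.bfloat16))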

namgyu-youn avatar Oct 10 '25 16:10 namgyu-youn

:link: Helpful Links

:test_tube: See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/3148

Note: Links to docs will display an error until the docs builds have been completed.

This comment was automatically generated by Dr. CI and updates every 15 minutes.

pytorch-bot[bot] avatar Oct 10 '25 16:10 pytorch-bot[bot]

Wait, no fix is needed? Does the test for the int4 HQQ config work? I remember it failing for me.

jerryzh168 avatar Oct 10 '25 16:10 jerryzh168

> Wait, no fix is needed? Does the test for the int4 HQQ config work? I remember it failing for me.

Not sure about the root cause, but it did fail: the weight was overridden to Int4Tensor instead of Int4TilePackedTo4dTensor, which produced a dispatch error.

namgyu-youn avatar Oct 10 '25 16:10 namgyu-youn

And Int4WeightOnlyConfig(int4_packing_format="tile_packed_to_4d", int4_choose_qparams_algorithm=Int4ChooseQParamsAlgorithm.HQQ) did work.

namgyu-youn avatar Oct 10 '25 17:10 namgyu-youn

> And Int4WeightOnlyConfig(int4_packing_format="tile_packed_to_4d", int4_choose_qparams_algorithm=Int4ChooseQParamsAlgorithm.HQQ) did work.

You mean test_awq_functionality failed, right?

jerryzh168 avatar Oct 10 '25 17:10 jerryzh168

> And Int4WeightOnlyConfig(int4_packing_format="tile_packed_to_4d", int4_choose_qparams_algorithm=Int4ChooseQParamsAlgorithm.HQQ) did work.
>
> You mean test_awq_functionality failed, right?

No, it worked. The test_awq_functionality results can be summarized as follows (see the sketch after this list):

  • Old: fails (the weight is overridden to Int4Tensor instead of Int4TilePackedTo4dTensor)
  • Int4WeightOnlyConfig(int4_packing_format="tile_packed_to_4d", int4_choose_qparams_algorithm=Int4ChooseQParamsAlgorithm.HQQ): passes; Int4TilePackedTo4dTensor is loaded
  • New (in this PR): passes
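
For concreteness, a minimal sketch of how to check which tensor subclass the weight ends up as. quantize_ is torchao's public entry point; the model shape is arbitrary, and this assumes a CUDA device where the kernels below are available.

import torch
from torchao.quantization import Int4WeightOnlyConfig, quantize_

linear = torch.nn.Linear(256, 256, dtype=torch.bfloat16, device="cuda")
quantize_(
    linear,
    Int4WeightOnlyConfig(
        group_size=128,
        int4_packing_format="tile_packed_to_4d",
        int4_choose_qparams_algorithm="hqq",
    ),
)
# Expect Int4TilePackedTo4dTensor here; seeing Int4Tensor instead is the
# override described in the "Old" bullet above.
print(type(linear.weight))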

namgyu-youn avatar Oct 10 '25 17:10 namgyu-youn

The reason Int4Tensor produces a dispatch error seems to be that the two subclasses use different kernels (sketched below):

  • Int4Tensor: the row-wise FBGEMM kernel, called via torch.ops.fbgemm.bf16i4bf16_rowwise https://github.com/pytorch/ao/blob/bb65dbc2649077729e8afb39a73ddef0d2adcb8f/torchao/quantization/quantize_/workflows/int4/int4_tensor.py#L187
  • Int4TilePackedTo4dTensor: the TinyGEMM kernel https://github.com/pytorch/ao/blob/bb65dbc2649077729e8afb39a73ddef0d2adcb8f/torchao/quantization/quantize_/workflows/int4/int4_tile_packed_to_4d_tensor.py#L54
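
In other words (a sketch; the fbgemm op is the one linked above, while aten._weight_int4pack_mm is the tinygemm op in PyTorch core that tile-packed weights typically route to; exact call signatures elided):

import torch

# Which kernel an F.linear call hits depends on the weight's tensor subclass:
#   Int4Tensor               -> torch.ops.fbgemm.bf16i4bf16_rowwise (FBGEMM, H100-only)
#   Int4TilePackedTo4dTensor -> torch.ops.aten._weight_int4pack_mm  (tinygemm)
# So when AWQ's calculate_qparams runs F.linear(acc / scales, w, bias), the
# subclass the weight was (possibly wrongly) converted to decides which op fires.
print(hasattr(torch.ops.aten, "_weight_int4pack_mm"))  # tinygemm op ships with core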

namgyu-youn avatar Oct 10 '25 17:10 namgyu-youn

for the script:

import torch
from torchao.quantization import Int4WeightOnlyConfig
from torchao.utils import _is_fbgemm_gpu_genai_available, torch_version_at_least

devices = ["cpu", "cuda"]
device_to_base_configs = {
    "cuda": [
        Int4WeightOnlyConfig(group_size=128),
        # Note: the functionality unit test doesn't work for hqq
        Int4WeightOnlyConfig(group_size=128, int4_packing_format="tile_packed_to_4d"),
        Int4WeightOnlyConfig(
            group_size=128,
            int4_packing_format="tile_packed_to_4d",
            int4_choose_qparams_algorithm="hqq",
        ),
    ],
}

for i, cfg in enumerate(device_to_base_configs["cuda"]):
    print(f"Config {i}:")
    print(f"  packing_format: {cfg.int4_packing_format}")
    print(f"  choose_qparams: {cfg.int4_choose_qparams_algorithm}")

I get:

Config 0:
  packing_format: Int4PackingFormat.PLAIN
  choose_qparams: Int4ChooseQParamsAlgorithm.TINYGEMM
Config 1:
  packing_format: tile_packed_to_4d
  choose_qparams: Int4ChooseQParamsAlgorithm.TINYGEMM
Config 2:
  packing_format: tile_packed_to_4d
  choose_qparams: hqq

and it runs locally; test_awq_functionality works for the hqq algorithm as well
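
(Side note on the output above: Config 0 shows the enum reprs while Configs 1 and 2 echo the raw strings, so string values appear to be stored as-is. If that ever matters for comparisons, the enums can be passed explicitly. The import location and the TILE_PACKED_TO_4D member name below are assumptions:)

from torchao.quantization import Int4WeightOnlyConfig
# Assumed import location for the enums; adjust to wherever torchao exports them.
from torchao.quantization.quantize_.workflows import (
    Int4ChooseQParamsAlgorithm,
    Int4PackingFormat,
)

cfg = Int4WeightOnlyConfig(
    group_size=128,
    int4_packing_format=Int4PackingFormat.TILE_PACKED_TO_4D,  # assumed member name
    int4_choose_qparams_algorithm=Int4ChooseQParamsAlgorithm.HQQ,
)
print(cfg.int4_packing_format, cfg.int4_choose_qparams_algorithm)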

jerryzh168 avatar Oct 10 '25 17:10 jerryzh168

> […] it runs locally; test_awq_functionality works for the hqq algorithm as well

Yeah, test_awq.py passes locally for me too. We don't have to debug this further and can just add the test case, right?

Hmm... I distinctly remember getting the following error:

torchao/prototype/awq/core.py:91: in calculate_qparams
    q_out = F.linear(acc / scales, w, self.bias)
    ↓
torchao/quantization/quantize_/workflows/int4/int4_tensor.py:161
    res = torch.ops.fbgemm.bf16i4bf16_rowwise(...)
    ↓
RuntimeError: cutlass cannot initialize

But everything is resolved without any change...? Mamma mia 😕

namgyu-youn avatar Oct 10 '25 17:10 namgyu-youn

> Yeah, test_awq.py passes locally for me too. We don't have to debug this further and can just add the test case, right?
>
> Hmm... I distinctly remember getting the following error: […] But everything is resolved without any change...? Mamma mia 😕

Yeah, it runs for me locally without any changes. Not sure why you got the error; it seems like an environment issue. Are you using an H100 machine? fbgemm is only available on H100.
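
A guard built on the _is_fbgemm_gpu_genai_available helper already imported in the script above would keep such tests from erroring on non-H100 machines. A sketch; the sm_90 capability check for H100 is an added assumption on top of this thread:

import pytest
import torch
from torchao.utils import _is_fbgemm_gpu_genai_available

def require_fbgemm_gpu_genai():
    # The FBGEMM bf16i4bf16_rowwise path needs the fbgemm_gpu_genai kernels
    # and an H100-class GPU; skip instead of hitting
    # "RuntimeError: cutlass cannot initialize" elsewhere.
    if not torch.cuda.is_available() or not _is_fbgemm_gpu_genai_available():
        pytest.skip("fbgemm gpu genai kernels unavailable")
    if torch.cuda.get_device_capability() < (9, 0):  # H100 is sm_90
        pytest.skip("requires an H100-class GPU")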

jerryzh168 avatar Oct 10 '25 17:10 jerryzh168

> Yeah, it runs for me locally without any changes. Not sure why you got the error; it seems like an environment issue. Are you using an H100 machine? fbgemm is only available on H100.

Yes, I only have an H100 during allocated time (luckily I have some now) and mostly use an A100. I've updated the title and context for this change.

namgyu-youn avatar Oct 10 '25 18:10 namgyu-youn

The CI failure (test/quantization/pt2e/test_quantize_pt2e_qat.py) looks unrelated.

namgyu-youn avatar Nov 09 '25 07:11 namgyu-youn

@pytorchbot label "topic: not user facing"

namgyu-youn avatar Nov 16 '25 14:11 namgyu-youn

Can you rebase? I've seen these CI errors before, but I believe they should be fixed now.

jerryzh168 avatar Nov 21 '25 05:11 jerryzh168

Sorry, I used the wrong git command; fixing it now.

namgyu-youn avatar Nov 21 '25 05:11 namgyu-youn

@jerryzh168 Fixed the rebase, please take a look.

namgyu-youn avatar Nov 21 '25 05:11 namgyu-youn