
rotation+gptq data

Open Andy0422 opened this issue 1 year ago • 7 comments

Hi,

Can you share the rotation+gptq PPL data? Is it better than smoothquant+gptq? Many thanks!

Andy0422 avatar Oct 11 '24 10:10 Andy0422

See https://github.com/HandH1998/QQQ/issues/13#issuecomment-2319955934. In my experience, rotation+gptq is generally better than smooth+gptq for per-channel quantization. However, this does not hold for some models, such as https://github.com/HandH1998/QQQ/issues/17.
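For readers unfamiliar with the two preprocessing schemes being compared, here is a minimal sketch of the difference, not QQQ's actual code: a QR-based random orthogonal matrix stands in for the (randomized) Hadamard rotations these methods typically use, and the function names are illustrative.

```python
import torch

def random_orthogonal(n: int) -> torch.Tensor:
    # QR of a Gaussian matrix yields a random orthogonal Q; rotation-based
    # methods usually use (randomized) Hadamard matrices instead.
    Q, _ = torch.linalg.qr(torch.randn(n, n))
    return Q

def rotate_weight(weight: torch.Tensor) -> torch.Tensor:
    # Rotation: W -> W @ Q spreads outlier energy across input channels
    # before per-channel quantization; Q is folded into the preceding
    # layer (Q @ Q^T = I), so the full-precision output is unchanged.
    Q = random_orthogonal(weight.shape[1])
    return weight @ Q

def smooth_weight(weight: torch.Tensor, act_scale: torch.Tensor,
                  alpha: float = 0.5) -> torch.Tensor:
    # Smoothing (SmoothQuant-style): migrate per-channel activation
    # magnitude into the weights; the inverse 1/s is folded into the
    # previous op. act_scale comes from calibration statistics.
    w_scale = weight.abs().amax(dim=0).clamp(min=1e-5)
    s = (act_scale.pow(alpha) / w_scale.pow(1 - alpha)).clamp(min=1e-5)
    return weight * s
```

Either transform runs before GPTQ; the comparison above is about which one leaves the weights easier to quantize per channel.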

HandH1998 avatar Oct 12 '24 02:10 HandH1998

@HandH1998

Hi, thank you for your kind help. I have encountered another problem with the calibration data.

From my test results below, the WikiText2 numbers look fine, but the numbers with the Pile calibration dataset do not match your original data. The Pile data I used is from https://huggingface.co/datasets/mit-han-lab/pile-val-backup/tree/main. Could you share your Pile dataset with me, or share your comments on this finding? Email: [email protected].

| Granularity | Method | Model | WikiText2 calib (PPL) | Pile calib (PPL) | Paper (PPL) |
|---|---|---|---|---|---|
| per-channel | smooth+gptq | Llama-2-7B | 5.98 | 6.14 | 5.95 |
| per-group | smooth+gptq | Llama-2-7B | 5.71 | 5.78 | 5.71 |
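For reference, PPL numbers like these are usually computed with the standard sliding-window loop over the WikiText2 test split; a minimal sketch (the model path and seq_len=2048 are assumptions, not details from this thread):

```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

@torch.no_grad()
def eval_ppl(model, tokenizer, seq_len: int = 2048) -> float:
    # Standard GPTQ-style perplexity loop over the WikiText2 test split.
    test = load_dataset("wikitext", "wikitext-2-raw-v1", split="test")
    ids = tokenizer("\n\n".join(test["text"]), return_tensors="pt").input_ids
    nlls = []
    for i in range(ids.shape[1] // seq_len):
        chunk = ids[:, i * seq_len : (i + 1) * seq_len].to(model.device)
        loss = model(chunk, labels=chunk).loss  # mean NLL over the chunk
        nlls.append(loss.float() * seq_len)
    return torch.exp(torch.stack(nlls).sum() / (len(nlls) * seq_len)).item()

# Example usage with an assumed model path:
# model = AutoModelForCausalLM.from_pretrained(
#     "meta-llama/Llama-2-7b-hf", torch_dtype=torch.float16, device_map="auto")
# tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
# print(eval_ppl(model, tokenizer))
```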

Andy0422 avatar Oct 14 '24 02:10 Andy0422

@Andy0422 We used pile for smoothing and wikitext2 for gptq in our paper. But the current code has been fixed to use the same dataset for both smoothing and gptq, so it is normal that you cannot reproduce the results of our paper with the latest code. It is not related to the pile data.
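To make the distinction concrete, here is a hedged sketch of the two pipelines; the helper and stage names are illustrative, not the repo's actual functions, and only the Pile loader path is taken from the dataset linked above.

```python
from datasets import load_dataset
from transformers import AutoTokenizer

def get_calib_samples(tokenizer, name: str, n_samples: int = 128,
                      seq_len: int = 2048):
    # Load calibration text from either WikiText2 or the linked Pile
    # validation backup (mit-han-lab/pile-val-backup).
    if name == "pile":
        data = load_dataset("mit-han-lab/pile-val-backup", split="validation")
    else:
        data = load_dataset("wikitext", "wikitext-2-raw-v1", split="train")
    ids = tokenizer("\n\n".join(data["text"][:5000]),
                    return_tensors="pt").input_ids
    n = min(n_samples, ids.shape[1] // seq_len)
    return [ids[:, i * seq_len : (i + 1) * seq_len] for i in range(n)]

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")  # assumed

# Paper setup (mixed datasets):
#   smoothing <- get_calib_samples(tokenizer, "pile")
#   gptq      <- get_calib_samples(tokenizer, "wikitext2")
# Current code (single dataset, per the comment above):
calib = get_calib_samples(tokenizer, "wikitext2")
#   smoothing <- calib
#   gptq      <- calib   # the same samples are reused for both stages
```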

HandH1998 avatar Oct 14 '24 11:10 HandH1998

> @Andy0422 We used pile for smoothing and wikitext2 for gptq in our paper. But the current code has been fixed to use the same dataset for both smoothing and gptq, so it is normal that you cannot reproduce the results of our paper with the latest code. It is not related to the pile data.

@HandH1998 Okay, I see. So do you think our test results are correct? Thank you!

Andy0422 avatar Oct 14 '24 14:10 Andy0422

@Andy0422 It is probably correct.

HandH1998 avatar Oct 15 '24 08:10 HandH1998

> @Andy0422 It is probably correct.

@HandH1998 One more question: do you employ the online Hadamard transform before down_proj, or do you skip all online transforms in your implementation? If you do employ it, did you evaluate the inference overhead? Thanks~

Andy0422 avatar Oct 21 '24 04:10 Andy0422

@Andy0422 I don't employ the online Hadamard transform.
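For context on what is being skipped here: an online Hadamard transform (as in QuaRot) rotates the activations entering down_proj at runtime, with the inverse rotation folded into the weights offline. A rough illustration, not QQQ's code:

```python
import torch

def hadamard(n: int) -> torch.Tensor:
    # Normalized Hadamard matrix via Sylvester construction; this sketch
    # only handles power-of-two dims. Real kernels use fast Walsh-Hadamard
    # (O(d log d)) and factorizations for dims like Llama's 11008.
    assert n & (n - 1) == 0, "power-of-two dims only in this sketch"
    H = torch.ones(1, 1)
    while H.shape[0] < n:
        H = torch.cat([torch.cat([H, H], 1), torch.cat([H, -H], 1)], 0)
    return H / (n ** 0.5)

class OnlineHadamardDownProj(torch.nn.Module):
    """Illustrative wrapper: rotate activations at runtime before a
    down_proj whose weights were rotated offline by H^T. The extra
    multiply per token is the inference overhead the question above
    asks about; QQQ skips it, per this answer."""
    def __init__(self, down_proj: torch.nn.Linear):
        super().__init__()
        self.down_proj = down_proj
        self.register_buffer("H", hadamard(down_proj.in_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.down_proj(x @ self.H.to(x.dtype))  # online transform
```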

HandH1998 avatar Oct 22 '24 11:10 HandH1998