Younes B
Can you instead try loading the model in fp32 and enabling mixed precision training with `fp16=True` in `TrainingArguments`?
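In case it helps, here is a minimal sketch of what I mean (the checkpoint name and `output_dir` are placeholders, not from your setup):

```python
import torch
from transformers import AutoModelForCausalLM, TrainingArguments

# Load the weights in full fp32 precision instead of fp16/bf16.
model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-350m",        # placeholder checkpoint
    torch_dtype=torch.float32,  # keep the master weights in fp32
)

# The Trainer then handles the fp16 autocast + gradient scaling for you.
training_args = TrainingArguments(
    output_dir="out",  # placeholder
    fp16=True,         # enable mixed precision training
)
```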
Related to https://github.com/casper-hansen/AutoAWQ/issues/367#issuecomment-1986243134: can you try importing `torch` before importing `awq`?
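i.e. something like this at the top of your script (just the import order changes, nothing else):

```python
# Workaround from the linked issue: importing torch first ensures its
# shared libraries are loaded before awq touches them.
import torch  # must come before the awq import
import awq
```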
Thanks @akx! If I understand correctly, this is only for Python 3.10 & 3.11? For older Python versions we need to build separate wheels, right?
Thanks a lot for the report @wkpark! Out of curiosity, would you mind also running the transformers integration tests? 🙏 First clone the repo: `git clone https://github.com/huggingface/transformers.git` Then run: `RUN_SLOW=1 pytest tests/quantization/bnb/test_4bit.py`
Thanks a lot for running the tests! Hmmm, I think you might not have installed transformers from source. Can you try building transformers from source? (`pip install...`
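(The command above was truncated in the original comment; the usual way to install transformers from source is `pip install git+https://github.com/huggingface/transformers.git`, assuming that is what was meant here.)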
Interesting! The good news is that only the serialization tests are failing. Can you try updating `accelerate`? `pip install -U accelerate` might fix the failing tests.
In addition to that, can you run the 8bit tests? 🙏 `RUN_SLOW=1 pytest tests/quantization/bnb/test_mixed_int8.py`
AMAZING @wkpark! 🎉 For the 8bit tests, the quality tests are expected to fail; don't worry about them.
Thanks @akx! I meant the slow tests in the transformers repository, not in the bnb repository (I think you were referring to the bnb slow tests here, no?)
Hey @Lavenderjiang, thanks a lot for reporting! The links have been fixed and are now public on the blog post. Link to PR: https://github.com/huggingface/blog/pull/927