ChatGLM2-6B
[BUG/Help] NameError: name 'round_up' is not defined
Is there an existing issue for this?
- [X] I have searched the existing issues
Current Behavior
I am fine-tuning on the ADGEN dataset with the P-tuning method and the default hyperparameters, and training fails with `NameError: name 'round_up' is not defined`. How can I fix it?
Deleting `--quantization_bit 4` from train.sh works for me, but I need to train with quantization.
Expected Behavior
No response
Steps To Reproduce
bash train.sh
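For context, a minimal sketch of the kind of command train.sh launches. Only `--quantization_bit 4` comes from the report above; the remaining flags, paths, and values are illustrative stand-ins for the stock P-tuning script and may not match your local copy.

```bash
# Illustrative sketch only: a trimmed P-tuning launch command.
# --quantization_bit 4 is the flag whose presence triggers the NameError;
# the other flags, paths, and values here are placeholders.
torchrun --standalone --nnodes=1 --nproc-per-node=1 main.py \
    --do_train \
    --train_file AdvertiseGen/train.json \
    --model_name_or_path THUDM/chatglm2-6b \
    --output_dir output/adgen-chatglm2-6b-pt \
    --pre_seq_len 128 \
    --quantization_bit 4
```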
Environment
- OS: Linux
- Python: 3.8
- Transformers: 4.30.2
- PyTorch: 2.0.0
- CUDA Support (`python -c "import torch; print(torch.cuda.is_available())"`) :True
Anything else?
No response
Quantization relies on cpm_kernels. Install it with:
pip install cpm_kernels
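To double-check that the fix actually reaches the training environment, a quick sanity check (my suggestion, not part of the original reply):

```bash
pip install cpm_kernels
# Verify the package is importable from the same Python that runs train.sh.
# If this import fails, the quantization code falls back silently and later
# raises "NameError: name 'round_up' is not defined" during training.
python -c "import cpm_kernels; print(cpm_kernels.__file__)"
```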
It works. Thanks a lot!
Thanks!
> Quantization relies on cpm_kernels. Install it with: `pip install cpm_kernels`
It really works. Thanks.