ChatGLM-Tuning
This is the error bitsandbytes throws when training on the test data. Does anyone know what is going on?
```
Welcome to bitsandbytes. For bug reports, please submit your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
/opt/conda/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py:136: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/usr/local/nvidia/lib64'), PosixPath('/usr/local/nvidia/lib')}
  warn(msg)
/opt/conda/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py:136: UserWarning: /usr/local/nvidia/lib:/usr/local/nvidia/lib64 did not contain libcudart.so as expected! Searching further paths...
  warn(msg)
CUDA_SETUP: WARNING! libcudart.so not found in any environmental path. Searching /usr/local/cuda/lib64...
/opt/conda/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py:136: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/usr/local/cuda/lib64')}
  warn(msg)
/opt/conda/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py:136: UserWarning: WARNING: No libcudart.so found! Install CUDA or the cudatoolkit package (anaconda)!
  warn(msg)
CUDA SETUP: Highest compute capability among GPUs detected: 7.5
CUDA SETUP: Detected CUDA version 116
CUDA SETUP: Loading binary /opt/conda/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cpu.so...
/opt/conda/lib/python3.10/site-packages/bitsandbytes/cextension.py:31: UserWarning: The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers and GPU quantization are unavailable.
  warn("The installed version of bitsandbytes was compiled without GPU support. "
Explicitly passing a revision is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
The argument trust_remote_code is to be used with Auto classes. It has no effect here and is ignored.
Overriding torch_dtype=None with torch_dtype=torch.float16 due to requirements of bitsandbytes to enable model loading in mixed int8. Either pass torch_dtype=torch.float16 or don't pass this argument at all to remove this warning.
Loading checkpoint shards:   0%| | 0/8 [00:01<?, ?it/s]
Traceback (most recent call last):
  File "/home/python/finetune.py", line 162, in
```
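Reading the log, the key lines are `No libcudart.so found!` followed by `Loading binary ... libbitsandbytes_cpu.so`: bitsandbytes could not locate the CUDA runtime library, so it fell back to its CPU-only binary, and the 8-bit load then fails. A minimal diagnostic sketch (the `find_libcudart` helper below is my own, not part of bitsandbytes or this repo) that mimics that search:

```python
# Check whether libcudart is discoverable, the way the bitsandbytes CUDA
# setup roughly does: via the dynamic loader, then via LD_LIBRARY_PATH.
# If this prints None, bitsandbytes will load the CPU binary, matching
# the warnings in the log above.
import os
import ctypes.util


def find_libcudart():
    """Return a path/name hint for libcudart, or None if it cannot be found."""
    # 1. Ask the dynamic loader (ldconfig cache on Linux).
    hit = ctypes.util.find_library("cudart")
    if hit:
        return hit
    # 2. Fall back to scanning LD_LIBRARY_PATH entries manually.
    for d in os.environ.get("LD_LIBRARY_PATH", "").split(os.pathsep):
        if d and os.path.isdir(d):
            for name in os.listdir(d):
                if name.startswith("libcudart.so"):
                    return os.path.join(d, name)
    return None


print(find_libcudart())
```

If this returns None while you do have CUDA installed, adding the directory that actually contains `libcudart.so` (often `/usr/local/cuda/lib64` on a normal install) to `LD_LIBRARY_PATH` before launching training is a common fix.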
I hit this too. See this bitsandbytes issue: https://github.com/TimDettmers/bitsandbytes/issues/156 (still unresolved).
I ran into this problem as well. The official documentation says: "The bitsandbytes library is currently only supported on Linux distributions. Windows is not supported at the moment." So the bitsandbytes optimizations cannot be used on Windows, and I still have no fix. Switching to a Linux system should work. If anyone has gotten this working on Windows, please share how.
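Given that platform limitation, one pragmatic sketch is to only request 8-bit loading on Linux and fall back to a plain fp16 load elsewhere. The `int8_supported` helper and the `kwargs` dict below are my own illustration (you would splat `kwargs` into the model's `from_pretrained` call), not code from this repo:

```python
# Only enable bitsandbytes int8 loading on platforms where its GPU build
# existed at the time of this thread (Linux only); otherwise load normally.
import platform


def int8_supported():
    """True if bitsandbytes' GPU kernels are expected to work here."""
    return platform.system() == "Linux"


# Extra keyword arguments to pass into a from_pretrained-style loader.
kwargs = {"load_in_8bit": True} if int8_supported() else {}
```

This does not make int8 work on Windows; it just avoids the crash by skipping quantization where the library cannot support it.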
https://github.com/TimDettmers/bitsandbytes/issues/156
I am on Windows 11. I downgraded CUDA to 11.6 and solved it by following this issue (I don't know C++, so I used someone else's precompiled 116-version DLL): https://github.com/TimDettmers/bitsandbytes/issues/30#issuecomment-1257676341
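For anyone trying that workaround, the mechanical step is copying the prebuilt DLL into the installed bitsandbytes package folder. A sketch of that copy step (the function name and the `libbitsandbytes_cuda116.dll` filename follow the linked issue comment and may differ for your bitsandbytes version; per that issue you may also need to patch `cuda_setup/main.py` so it loads this DLL):

```python
# Copy a prebuilt bitsandbytes CUDA 11.6 DLL into the installed package,
# per the workaround in bitsandbytes issue #30 for Windows users.
import os
import shutil


def install_prebuilt_dll(dll_path, site_packages):
    """Place the prebuilt DLL inside the bitsandbytes package directory."""
    target_dir = os.path.join(site_packages, "bitsandbytes")
    os.makedirs(target_dir, exist_ok=True)
    shutil.copy(dll_path,
                os.path.join(target_dir, "libbitsandbytes_cuda116.dll"))
    return target_dir
```

Note this only helps if your installed CUDA version matches the DLL (11.6 here), which is why the CUDA downgrade was needed first.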